#however when it is subject to the constraints of its purpose as a machine
Explore tagged Tumblr posts
Text
I think this part is truly the most damning:

If it's all pre-rendered mush and it's "too expensive to fully experiment or explore" then such AI is not a valid artistic medium. It's entirely deterministic, like a pseudorandom number generator. The goal here is optimizing the rapid generation of an enormous quantity of low-quality images which fulfill the expectations put forth by The Prompt.
It's the modern technological equivalent of a circus automaton "painting" a canvas to be sold in the gift shop.




so a huge list of artists that was used to train midjourney’s model got leaked and i’m on it
literally there is no reason to support AI generators, they can’t ethically exist. my art has been used to train every single major one without consent lmfao 🤪
link to the archive
#to be clear AI as a concept has the power to create some truly fantastic images#however when it is subject to the constraints of its purpose as a machine#it is only capable of performing as its puppeteer wills it#and these puppeteers have the intention of stealing#tech#technology#tech regulation#big tech#data harvesting#data#technological developments#artificial intelligence#ai#machine generated content#machine learning#intellectual property#copyright
37K notes
·
View notes
Text
TAFAKKUR: Part 330
OLFACTION: SENSING THE SCENTS: Part 2
E-NOSE TECHNOLOGIES
The sensor is the e-nose's key element, and the sensor type is its defining characteristic. There are 5 types of e-nose sensors, as follows:
Optical sensors: Optical fiber sensors work through fluorescence and chemiluminescence. The tube's glass fibers are coated with a thin layer of active material on their sides and at both ends. As VOCs interact with the chemical dyes in the organic matrix, the dye's fluorescent emission spectrum changes. These changes are then measured and recorded for different odorous particles.
Fiber arrays with different dye mixtures can be used as sensors. These are fabricated by dip-coating (binding a plastic solution to a substrate), micro-electromechanical systems (MEMS), and precision machining. The main advantage is that this adjustable tool can filter out noise. Also, since many dye forms are available in biological research, the sensors are cheap and easy to fabricate. But the instrumentation control systems are complex, which adds to the cost, and the sensors have a limited lifetime due to photobleaching (the sensing process slowly consumes the fluorescent dyes).
Optical sensors are sensitive and can measure concentrations in the low ppb (parts per billion) range; however, they are still in the research stage of development.
Spectrometry-based Sensors: This group consists of gas chromatography (GC), based on molecular spectra; mass spectrometry (MS), based on atomic mass spectra; and light spectroscopy (LS), based on the transmitted light spectrum. The first two can analyze the odor's components accurately, which is a plus. However, their use of a vapor trap to increase concentration can alter the odor's characteristics. LS devices do not consume the sample, but they do require tunable quantum-well devices. GC and MS devices are commercially available, while LS devices are only at the research stage. All spectrometry-based sensors are fabricated by MEMS and precision machining, and can measure odors down to a low ppb level.
The GC tube decomposes the odorant into its molecular constituents, and MS forms a mass spectrum for each peak. The spectra are then compared to a large precompiled database of spectral peaks to classify and identify odorants.
MOSFET (Metal-oxide-semiconductor field-effect transistor): The basic principle here is capacitive charge coupling. In other words, VOCs react with the catalytic metal and thereby alter the device's electrical properties. The device's selectivity and sensitivity can be fine-tuned by varying the metal catalyst's thickness and composition. MOSFETs are micro-fabricated and commercially available, but can measure only parts per million. They can be manufactured together with their electronic interface circuits, which minimizes batch-to-batch variation. However, the gas produced by the VOC-metal reaction must penetrate the MOSFET's gate.
Conductivity Sensors: The sensor types used here are metal oxide or conducting polymer. Both operate on the principle of conductivity, for their resistance changes as they interact with VOCs. Metal oxide sensors are common, commercially available, inexpensive, and easy to produce (they are micro-fabricated). Their sensitivity ranges from 5-500 ppm. However, they only operate at high temperatures (200°C to 400°C).
In conducting polymer sensors, VOCs bond with the polymer backbone and change the polymer's conductivity (resistance). They are micro-fabricated using electroplating and screen printing, are commercially available, and can measure from 0.1 to 100 ppm. They operate at room temperature, yet are very sensitive to humidity. Moreover, it is hard to electropolymerize the active material, which makes batch-to-batch variation inevitable. Sometimes VOCs penetrate the polymer chain, which means that the sensor must be returned to its neutral, reference state, a very time-consuming process.
Piezoelectric Sensors: These devices, which measure any change in mass, come in two varieties: quartz crystal microbalance (QCM) and surface acoustic wave (SAW) devices.
QCM sensors have a resonating disk with metal electrodes on each side. When the gas sample is applied to the resonator's surface, the polymer coating absorbs VOCs from the environment. Its mass therefore increases, which lowers the resonance frequency. As the U.S. Navy has long used QCMs, this technology is familiar, well developed, and commercially available. A QCM sensor is fabricated by screen-printing, wire bonding, and MEMS. Although it can measure a 1.0 ng mass change, its MEMS fabrication and interface electronics are major disadvantages. QCM sensors are quite linear in their response to mass changes, their sensitivity to temperature can be adjusted, and their response to water varies with the material used.
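The mass sensitivity quoted above follows from the Sauerbrey relation, the standard model for QCM response (the relation and the example numbers below are not from the article; the 10 MHz crystal and 1 cm² electrode area are assumptions chosen only for illustration). A minimal Python sketch:

```python
import math

# Sauerbrey relation: resonance-frequency shift of a quartz crystal microbalance
# caused by a small mass deposited on its surface (CGS units throughout).
RHO_Q = 2.648      # density of quartz, g/cm^3
MU_Q = 2.947e11    # shear modulus of quartz, g/(cm*s^2)

def sauerbrey_shift(f0_hz: float, delta_mass_g: float, area_cm2: float) -> float:
    """Return the frequency shift in Hz; negative means the resonance frequency drops."""
    return -2.0 * f0_hz ** 2 * delta_mass_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

# Hypothetical 10 MHz crystal, 1 cm^2 electrode, and the 1.0 ng mass change cited above.
df = sauerbrey_shift(f0_hz=10e6, delta_mass_g=1.0e-9, area_cm2=1.0)
print(f"Frequency shift: {df:.3f} Hz")  # roughly -0.23 Hz, a measurable drop
```

A sub-hertz shift is well within reach of ordinary frequency counters, which is why nanogram-level mass changes are detectable.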
MEMS techniques must be handled carefully, for the surface-to-volume ratio increases drastically as dimensions approach the micrometer level. Measurement accuracy is lost when the increasing surface-to-volume ratio begins to degrade the signal-to-noise ratio. This problem occurs in most micro-fabricated devices. SAW devices operate at much higher frequencies. Since 3-D MEMS processing is unnecessary, SAW devices are cheaper. As with QCM devices, many polymer coatings are available. Differential devices can be quite sensitive. However, for both QCM and SAW sensors the interface electronics are more complex than those of conductivity sensors. Also, as the active membrane ages, the resonance frequency can drift, so it must be tracked over time. SAW devices are commercially available and sensitive to mass changes at the 1.0 pg level.
PATTERN RECOGNITION
Any e-nose's primary task is to identify an odorant and perhaps measure its concentration. After the signal processing step comes the crucial step of pattern recognition: preprocessing, feature extraction, classification, and decision-making. A database of odors must be formed for comparison purposes.
Preprocessing accounts for sensor drifts and reduces sample-to-sample variation. This can be done by normalizing sensor response ranges, manipulating sensor baselines, and compressing sensor transients.
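As a rough illustration of the first two of those operations (not from the article; the array shapes and the choice of unit-range scaling are assumptions), baseline correction and range normalization might look like this sketch:

```python
import numpy as np

def preprocess(responses: np.ndarray, baselines: np.ndarray) -> np.ndarray:
    """Baseline-correct and range-normalize raw e-nose readings.

    responses: (n_samples, n_sensors) raw sensor readings
    baselines: (n_sensors,) resting response of each sensor
    """
    corrected = responses - baselines                 # baseline manipulation
    lo, hi = corrected.min(axis=0), corrected.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)       # per-sensor response range
    return (corrected - lo) / span                    # normalize each sensor to [0, 1]
```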
Feature extraction involves dimensionality reduction, a crucial step for statistical data analysis, since the number of examples in the database is usually limited by financial constraints. The high dimensionality produced by sensor arrays is reduced to the information relevant for pattern recognition, so that only significant data are extracted. As most dimensions are correlated and dependent, it is better to reduce dimensionality to a few informative axes.
Feature extraction is usually accomplished by classical principal component analysis (PCA) or linear discriminant analysis (LDA). PCA is a linear transformation that finds the maximum-variance projections and is the most widely used technique for feature extraction. But as PCA ignores class labels, it is not an optimal technique for odor recognition.
LDA seeks to maximize the distance between class label examples and minimize the within distance, and thus is a more appropriate approach. LDA is also a linear transformation. For instance, LDA might better discriminate subtle but crucial odor projections, whereas PCA can remove the high variance random noise in a projection.
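A minimal sketch of the two transforms side by side, using scikit-learn (the library choice and variable names are assumptions, not part of the article); X is the preprocessed feature matrix and y the odor class labels:

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extract_features(X, y, n_axes=2):
    """Reduce the sensor-array dimensionality to a few informative axes."""
    # PCA: maximum-variance projection, ignores the class labels y.
    pca_axes = PCA(n_components=n_axes).fit_transform(X)
    # LDA: maximizes between-class separation; n_axes can be at most (number of classes - 1).
    lda_axes = LinearDiscriminantAnalysis(n_components=n_axes).fit_transform(X, y)
    return pca_axes, lda_axes
```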
The classification stage identifies odors. Classical classification techniques are KNN (k-nearest neighbors), Bayesian classifiers, and ANNs (artificial neural networks). KNN with, say, 5 nearest points will find the 5 closest matches in the precompiled database. The best-represented class among those matches is assigned as the tested material's odorant class.
Bayesian classifiers first assign a posterior probability to each class in the lower-dimensional space and then pick the class that maximizes that probability. An ANN is closer to biological odor recognition: after being trained on the odor database, it is exposed to the unknown odorant and reports the odorant class with the largest response. The classifier estimates the class and places a confidence level on it.
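A correspondingly minimal sketch of the KNN stage described above, under the same scikit-learn assumption; the training features, labels, and unknown sample are placeholders for a real odor database:

```python
from sklearn.neighbors import KNeighborsClassifier

def classify_odor(train_features, train_labels, unknown_sample, k=5):
    """Match an unknown odorant against the precompiled odor database."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_features, train_labels)
    probs = knn.predict_proba([unknown_sample])[0]  # fraction of the k neighbors in each class
    best = probs.argmax()
    return knn.classes_[best], probs[best]          # predicted odor class and a rough confidence
```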
In decision-making, risks and application-specific knowledge are considered in order to modify the classification. All decisions are reported, even a non-match.
CONCLUSION
As this article indicates, we can expect great progress in this area. And with each step forward, science and technology will continue to point toward the Greatest Artist's most subtle designs and allow us to appreciate them better.
#allah#god#prophet#Muhammad#quran#ayah#islam#muslim#muslimah#help#hijab#revert#convert#religion#reminder#hadith#sunnah#dua#salah#pray#prayer#welcome to islam#how to convert to islam#new convert#new muslim#new revert#revert help#convert help#islam help#muslim help
1 note
·
View note
Text
Review: The Bhagavad Gita & Personal Choice

At once a down-to-earth narrative and a multifaceted spiritual drama, the Gita bears a concrete timelessness, with its magnetism lying in the fact that it is a work not of religious dogma, but of personal choice.
For years, the Bhagavad Gita has distinguished itself as a masterpiece of spiritual and philosophical scripture. Beautifully condensing Upanishadic knowledge, the Gita traverses a number of subjects – from duty as a means toward liberation (moksha), to the risks of losing oneself to worldly temptations, to the dichotomy between the lower self (jiva) and the ultimate, eternal Self (atman) – as well as detailing a vast spectrum of human desires, treating them not as one-dimensional abstractions, but as the complex combinatorial dilemmas they are in real life. Written in the style of bardic poems, the Gita bears a concrete timelessness, with its magnetism lying in the fact that it is a work not of religious dogma, but of personal choice.
Recounting the dialogue between Arjuna, the Pandava warrior-prince, and Lord Krishna, his godling charioteer, the text covers a wide swath of Vedantic concepts which are then left for Arjuna to either follow or reject. This element of personal autonomy, and how it can lead to self-acceptance, environmental mastery and finally the spiritual path to one's true destiny, is an immensely alluring concept for readers today. But Arjuna's positioning is the real crux of the Gita: his conflicts between personal desire and sacred duty are the undercurrent of the tale, and the text offers equally spiritual and practical insights in every nuance.
The Gita begins with two families torn into different factions and preparing for battle. The sage Vyasa, who possesses the gift of divine vision, offers to loan the blind King, Dhritarashtra, his ability so the King may watch the battle. However, Dhritarashtra declines, having no wish to witness the carnage – particularly since his sons, the Kauravas, are arrayed on the battlefield. Instead, the sage confers his powers to Sanjaya, one of Dhritarashtra's counselors, who faithfully recounts the sequence of events as they unfold. From the start, readers' introduction to the Gita is almost sensory, with the battlefield stirred into action by Bhishma, who blows his conch horn and unleashes an uproarious war-frenzy, "conches and kettledrums, cymbals, tabors and trumpets ... the tumult echoed through heaven and earth... weapons were ready to clash." These descriptions serve as marked contrasts to the dialogic exchange that follows between Arjuna and Krishna, which is serene and private in tone, the two characters wearing the fabric of intimate friendship effortlessly as they are lifted out of the narrative, suspended as if in an aether where the concept of time becomes meaningless.
Arjuna – whose questions carry readers through the text – stands with Krishna in the heart of the battlefield, between the two armies. However, when he sees the enemy arrayed before him, "fathers, grandfathers, teachers, uncles, brothers, sons, grandsons and friends," he falls into the grip of a moral paralysis. His whole body trembles and his sacred bow, Gandiva, slips from his hands. He tells Krishna, "I see omens of chaos ... I see no good in killing my kinsmen in battle... we have heard that a place in hell is reserved for men who undermine family duties." As Jacob Neusner and Bruce Chilton remark in the book, Altruism in World Religions, "To fight his own family, Arjuna realizes, will violate a central tenet of his code of conduct: family loyalty, a principle of dharma." The concept of dharma holds an integral place in Hindu-Vedantic ethos, with Sanatana Dharma (eternal and universal dharma) regarded as a sacred duty applicable to all, and Swadharma (personal and particular dharma) sometimes coming into conflict with the former. It is this inherent contradiction that catalyzes Arjuna's self-doubt. "The flaw of pity blights my very being; conflicting sacred duties confound my reason."
It is Krishna who must inspire him to fight, through comprehensive teachings in the essentials of birth and rebirth, duty and destiny, action and inertia. There is an allegorical genius here that will appeal tremendously to readers. The military aspects of the Gita can easily serve as metaphors for not just external real-life battles, but internal battles of the self, with the two armies representing the conflict between the good and evil forces within each of us. In that sense, Krishna's advice to Arjuna – the seven-hundred slokas – becomes a pertinent, pragmatic guide to human affairs. With each verse, both Arjuna and the readers are offered perspectives and practices which, if followed, can allow them to achieve a robust understanding of reality.
With the spiritual underpinnings of the wisdom known as Sankhya, Krishna explains different yogas, or disciplines. Readers slowly begin to encounter all the components of humanity and the universe, through the lens of Arjuna, whose moral and spiritual weltanschauung undergoes a gradual metamorphosis – from, "If you think understanding is more powerful than action, why, Krishna, do you urge me to this horrific act? You confuse my understanding with a maze of words..." to "Krishna, my delusion is destroyed... I stand here, my doubt dispelled, ready to act on your words."
We are introduced in slow but mesmerizing detail to the wisdom within Arjuna himself; an omniscience that eluded him because it was hidden beneath illusion, or maya. Indeed, Krishna makes it clear that the very essence of maya is to conceal the Self – the atman – from human understanding by introducing the fallacy of separation, luring individuals with the promise that enlightenment springs not from within but from worldly accouterments: in sensual attraction, in the enticements of wealth and power. However, Krishna makes it clear that the realm of the senses, the physical world, is impermanent, and always in flux. Whereas he, the supreme manifestation of the divine and the earthly, the past, present and future, is there in all things, unchanging. "All creatures are bewildered at birth by the delusion of opposing dualities that arise from desire and hatred."
Although these themes are repeated often throughout the Gita, in myriad ways, not once do they become tiresome. Although Krishna's role in much of the Mahabharata is that of a Machiavellian trickster, invested in his own mysterious agenda, he does eventually reveal himself to Arjuna as the omniscient deity. Yet never once does he coerce Arjuna into accepting his teachings, though they are woven inextricably and dazzlingly through the entirety of the Gita. Rather, he gives Arjuna the choice to sift through layers of self-delusion and find his true Self. This can be achieved neither through passive inertia, nor through power-hungry action, but through the resolute fulfillment of duty that is its own reward. In order to dissolve the Self, the atman, into Brahman and achieve moksha, it is necessary to fight all that is mere illusory temptation. Just as Krishna promises Arjuna, victory is within reach, precisely because as a Kshatriya-warrior, it is his sacred duty – his destiny – to fight the battle. More than that, the desire to act righteously is his fundamental nature; the rest is pretense and self-delusion. "You are bound by your own action, intrinsic to your being ... the lord resides in the heart of all creatures, making them reel magically, as if a machine moved them."
Although the issues that Arjuna grapples with often become metaphysical speculations, they never dehumanize his character. His very conflict between the vacillations of the self and sacred duty assures his position as something greater and more complex than a mere widget fulfilling Krishna's agenda. It is through the essence of this conflict that he grows on a personal and spiritual level. Conflict so personal and timeless is inextricably tied to choice. In Arjuna's case, the decision to shed the constraints of temporal insecurities and ascend toward his higher Self – freed from the weight of futile self-doubt and petty distractions – rests entirely in his hands. Krishna aids him through his psychospiritual journey not with a lightning-bolt of instantaneous comprehension, but through a slow unraveling of illusions, so that Arjuna will arrive at a loftier vantage, able to reconnect with his true Self and to remember his sacred duty. The answers are already within him; the very purpose of Krishna's counsel is merely to draw them out. "Armed with his purified knowledge, subduing the self with resolve, relinquishing sensuous objects, avoiding attraction and hatred... unpossessive, tranquil, he is at one with the infinite spirit."
At its core, the Bhagavad Gita is timelessly insightful and life-affirmingly human, an epic that illustrates the discomfiting truths and moral dilemmas that continue to haunt modern-day readers. Despite its martial setting, it is fueled not by the atrocities of battle but by the wisdom of devotion, duty and love. Its protagonist is held back by lingering personal attachments, yet compelled by godly counsel to surpass both the narrow private restrictions of self-doubt and the broad social framework of family, in order to reconnect with his pure, transcendental Self. However, the Gita does not offer its teachings as rigid doctrine, but as a gentle framework through which readers can achieve a fresh perspective on the essential struggles of humankind. At once a down-to-earth narrative and a multifaceted spiritual drama, the Gita bears a concrete timelessness, with its magnetism lying in the fact that it is a work not of religious dogma, but of personal choice.
1 note
·
View note
Text
As social acceleration and digital capitalism advance together, the trend of "accelerationism" has emerged at this historic moment.
If social acceleration is a perception of pace and the accelerating society is a cognitive framework for that acceleration, then accelerationism is a theoretical proposition with a distinct attitude.
Although the perception of speed has always accompanied the process of modernization, philosophy and the critique of modernity have traditionally viewed it with concern rather than optimism. The dichotomy constructed by modern social theory places machine, technology and speed on one side, and humanity, culture and soul on the other: the former squeezes and damages the latter, resulting in human alienation. This stance is still evident in the German scholar Hartmut Rosa's critique of the logic of social acceleration.
Accelerationism eliminates such antagonism and tension. It holds that capitalism enslaves technology and science, that the potential productive forces should be liberated, and that all the science and technology developed in capitalist society should be used to accelerate technological development, thus triggering social struggles and realizing the blueprint of post-capitalism. The Accelerationist Manifesto, which emerged in 2013, stands out as an anti-neoliberal platform of the Western radical left.
It is generally believed that the accelerationist current of thought has a historical connection with French left-wing thought of the 1970s. After the failure of the May 1968 movement, some French thinkers argued that one should move even further in the direction of the market's movements than capitalism itself does. In the 1990s Nick Land, a British philosopher, argued that politics was out of date, that confrontational tactics should be abandoned, and that the logic of capitalism should be accelerated until humans became a drag on the planet's intelligence.
That is to say, accelerationism essentially has two wings. The left's accelerationist vision is to bury capitalism through technology. The right's accelerationist vision is that technology and capital are naturally combined and that all constraints should be lifted to achieve infinite acceleration. The accelerationists of the left take capitalism as their opponent, arguing that unblocking technology would lead to its collapse and the birth of a new form of human society. Right-wing accelerationism embraces the economic logic of capitalism by giving technology unlimited space, with the result that humans may be eliminated as "backward productive forces" while "superhumans" continue to operate on the profit track.
Like Lyotard's "libidinal economics", left-wing accelerationism, though it carries a vision of a new society, accelerates into a risky game of "suicide attack". After the failure of May 1968, Lyotard believed that people could no longer find forms of fulfillment outside the libido, and that workers could only experience pleasure in using their bodies frantically. The accelerationists of the left argue that "we are betting on the untapped transformative potential of scientific research". Most people may not question this today, but how can we be sure that this transformative potential, once unleashed, would destroy only the capitalist order rather than the entire society, and that from the ruins people would have a chance to rebuild? Will the capitalist system be the first thing destroyed by infinite acceleration, or will it be ecology, climate, resources and humanity?
When we say that science and technology are the primary productive force, the implicit premise is that they are used in accordance with human purposes; otherwise science and technology may become the primary destructive force. Today this concern is not unfounded. Human beings are the core of productivity and the subject of labor, yet acceleration has already placed people under great pressure, and further acceleration depends more and more on technology itself. The society that accelerationism would bring about may be one to which people can only adapt at the scale set by technology; people would be left out of the social process, or at least withdrawn from its control. In the future prospect of accelerationism, the "post-capitalist society" is never concretely envisaged, and human freedom, liberation and all-round development are not its subject. As far as social ideals are concerned, left-wing accelerationism does not even offer a painted pie in the sky. The struggle strategies it envisages, such as building knowledge bases, mass control of the media, uniting the scattered proletariat, and proposing social-technological leadership, are in fact feeble. It is also in danger of being absorbed by right-wing accelerationism and reduced to a supporting role.
As for right-wing accelerationism, it advocates a "technology-capital alliance". Therefore, even though it believes that the existing capitalist system restricts the development of technology, it does not intend to reject the logic of capitalism; rather, it believes that the acceleration of technology is bound to produce the strongest, and therefore the best, social system. What human beings pursue should not be "meaningless affairs" such as equality, democracy and pluralism, but the realization of technology-capital. Even if human beings are replaced, that is natural and reasonable. In this way, right-wing accelerationism completely abandons the idea that "the human being is the end", placing technology in the position of the noumenon, with capital as the guarantee of technology's ontological status.
The two classical cognitive constructs of time and space have been dissolved into "acceleration" by modern technology, and the real experience of time and space has been displaced into digital experience. Faced with this rapid change, modern philosophy and theory are beginning to lose ground. Technology anxiety is receding from the mainstream. Technology criticism, cultural criticism, class analysis, the crusade against capital, and so on are all becoming less vocal. While technology is showing great unpredictability, blind expectations of good outcomes are on the rise, and accelerationism can be seen as a typical example. It is worth noting that the future envisioned by accelerationism does subvert the existing capitalist order, but people are not promised the status of subject, nor the freedom to build social relations in the course of development; they are promised only a chance to rebuild society, or even the prospect of being replaced by the "superhuman".
Accelerationism takes Marx's "Fragment on Machines" as its theoretical source. The so-called "Fragment on Machines" appears in the Grundrisse (Outlines of the Critique of Political Economy). Marx wrote: "The development of fixed capital shows the extent to which general social knowledge has become a direct productive force, and thus the extent to which the conditions of the social process of life are themselves controlled by and modified in accordance with general intelligence." When the automatic machine enters society as the embodiment of general knowledge, living labor becomes a secondary link in production; labor time ceases to be the measure and source of wealth, and technology makes wealth no longer depend on people's labor time. Yet so long as this remains within capitalist relations of production, the entry of the machine may simply push workers out, and does not necessarily increase free time.
On the whole, Marx was always concerned with the liberation and free development of human beings. The relationship between technology, labor and capital is constantly clarified and elaborated in Das Kapital. Accelerationism, however, is not so much a theory of how humans achieve freedom and liberation under technological conditions as a theory of how humans should ensure that technology achieves its goals. Frankly, in both the accelerationist left and the accelerationist right, what I see is an ardent argument for making technology happen, for making it accelerate even further; it is hard to see any deep concern for human development.
1 note
·
View note
Text
Classic car insurance in Texas, USA with 40% off
It is hard to contain your excitement when riding in a vintage car with a sunroof on a December afternoon. Yelling and shouting, taking it out on the coastal drive with your best friends, will surely leave behind some joyful memories. But don't let this lovely time turn into a nightmare after a stop by the traffic police. First, check your policy papers for classic auto insurance in Texas.
Is your policy expired?
Are you missing your liability cover?
Are you carrying the right policy for your classic automobile?
If you don't meet these conditions, you could find yourself on the wrong side of the law. Hey, are you scared? Relax; we can help you avoid such circumstances. Put the brakes on your racing mind and read this post in detail.
How different is the scenario when shopping for classic auto insurance?
You are a lucky car owner: the vintage, old-school vehicle in your garage is not just a car, it's a diamond. Unlike ordinary cars, which usually depreciate, classic automobiles gain value with time and maintenance. Beyond this paradoxical contrast, some other key differences are worth factoring into your policy and its claims.

Be cool and pay a lower premium on driving your old antique machine
Unlike regular vehicles that are driven daily, classic cars are their owners' babies. Vintage car lovers take their vehicle out once in a blue moon. Due to good maintenance and extra care, these automobiles qualify for a collector's vehicle insurance policy. Surprisingly, collector vehicles can save up to two-thirds of the premium cost compared with ordinary older vehicles. Further, owners often choose low-coverage plans for these vehicles since they rarely drive them.
Does manufacturing date make your car a classic?
If you think a vehicle must be over 50 years old to be categorized as a classic, it's time to get your facts right. In reality, most classic car insurers require an automobile to be only 15 years old to qualify as collectible. However, there can be other grounds for collectibility, such as special vehicles or limited editions. With this in mind, along with other relevant factors for which conventional auto insurance policies make no provision, a collector's insurance policy defines itself. For reference: regular car insurers often refuse to insure vehicles older than 25 years, while age poses no problem under a collector car insurance policy.
The difference in agreement on the settlement value
When you buy car insurance, there is a settlement basis for total-loss claims. For traditional vehicles this is known as the actual cash value settlement, which the company determines after the damage. In contrast, a collector car insurance policy uses a separate agreed value provision: you and the insurer decide on the collector car's value by mutual consent.
Undertaking factors of restoration
As you know, those who buy old cars often restore their automobiles sooner or later. Some may wish to upgrade it for performance and looks, or others may want personalization. Certainly, these improvements make the automobile different from its manufacturing state.
Auto insurers generally estimate the claim amount based on factory-installed equipment.
Due to the modification of vehicles, regular auto insurers have limitations on the claim for repair and the settlement value in the case of total loss claims.
In comparison, the collectible vehicle policy for classic cars does not disappoint you with such constraints. The policy takes the various improvements into account to offer the right amount for repair and total-loss claims. These improvements include upgraded engines, disc brakes, or suspension. Further, installed power steering, interior decor, air conditioning, and much more are also considered.
Choose the best repair spots with expertise in classic cars.
You may not get much help from a regular insurance company when fixing faults in your vintage automobile. They may send you to any second-rate repair shop, leaving you dissatisfied with the service. But the specialists working with us know all the genuine places to treat your baby and give it a new life. Our classic auto insurance company in Texas regularly reviews the best repair shops for classic cars and recommends the right list of places.
Pay lower deductible than regular auto insurance
Besides, you have the option of paying a lower deductible when buying a collectible auto insurance policy. Here, in contrast to regular car insurance, the deductible is quite low: anything between $0 and $250, depending on your plan. So if you are in an accident, you are lucky indeed to spend substantially less.
Is my 1956 vintage Aston Martin eligible for car insurance?
As discussed earlier, it’s not the age that makes your vehicle classic; in fact, it’s your love and care for a vehicle that you rarely drive, and only for pleasure. Companies consider several factors for the insurance of a collectible car in Texas, such as:
Conditions for the automobile
The car is older than 15 years, or 25 years in the case of some insurers.
Your insurer may ask for proof of an alternative car that you regularly use before considering your policy vehicle as collectible.
The car must be stored in good condition and kept pristine in a garage; the company may ask for proof of this too.
A vehicle shall have a smoothly running engine too.
Classic automobiles should have low annual mileage. For this reason, you should not drive yours more than 7,500 miles annually.
The car is designed for on-road use in particular.
Most important of all, no damage to the vehicle is acceptable under the collectible car insurance policy.
Conditions for drivers
Drivers must have a clean driving record for at least five years.
Some companies require policyholders to be at least 25 years old.
List of vehicles ideal for classic auto insurance in Texas
A range of cars can qualify for a classic car insurance policy if they are well maintained and meet all the legal conditions. Generally, these vehicles are insured:
Old military vehicles.
Vintage cars manufactured between the 1920s and 1950s.
Collectible cars of the period between the 1950s and 1990s.
Also, vehicles of limited editions and special models.
Finally, older farm vehicles collected for automobile shows and display purposes.
Know about the coverage and pricing of classic car insurance
What is under the coverage of a collectible car insurance policy?
Collectible car insurance works similarly to a typical one-year car insurance policy, except that you have special provisions and limits on usage and mileage. Whether you need comprehensive, collision or liability coverage, you get it all.
Under the extra protection, you can choose medical insurance, uninsured driver coverage, and breakdown insurance.
Moreover, those who own high-cost classic automobiles should choose comprehensive coverage to be protected against damage and burglary. It also includes liability and collision coverage.
On the other hand, you can opt for additional coverage for road trips, spare parts, or roadside assistance.
Individual companies have different rates; we advise you to compare rates by auto insurance quotes in Texas.
How costly is classic car insurance?
As for pricing, it would be no wonder if you saved an extra 30% on the insurance of your classic car.
The factors generally affecting the pricing of your classic car include:
The value of your vehicle, including modifications and upgrades.
Then, the driving record of the owner.
Thirdly, the area where you are living.
All in all, this significant difference in the cost of classic car insurance comes down to these vehicles being used less than regular cars.
Good and bad of buying classic auto insurance
Benefits of classic car insurance
Low-cost policy: Above all, you may get it at a lower price than a typical car insurance policy. On average, policyholders spend 30% less when getting our classic car insurance.
Coverage for spare parts: No worries about the damage or theft of spare parts such as gears and tires.
Get a cash settlement: Is your classic car stolen or totaled? Relax; your policy pays out a cash settlement in such events.
Loss in your absence: In case your car is away from you at the time of loss, such as at an automobile show, you still qualify for coverage.
On-road support: When your vehicle needs on-road support to prevent further damage, the classic car insurance policy includes the cost of towing it.
Medical reimbursement at an auto show: If someone sustains an injury in your car at an auto exhibition, this coverage protects you from liability.
Disadvantages of classic car insurance
Rare discount opportunities: One drawback of classic car insurance is that, unlike regular car insurance, you hardly ever get an opportunity to grab a saving offer on a classic auto insurance policy. Since your insurer is a smaller niche company, you mostly miss out on discounts for services such as bundling policies.
Limit on mileage: Another demerit is the mileage limit; your insurer may require you not to drive beyond a certain distance to remain eligible.
Fewer driving coverage choices: Some classic car insurers include coverage options like roadside support and extended medical coverage. However, these are sometimes not available with the smaller companies.
Policy change: Your classic car insurance can change over time as the car's value rises and falls, whereas typical car insurance plans have no such change. For reference: if you are restoring your automobile, you will have to adjust the policy again after finishing the restoration.
Anything else you'd like to know about our classic auto insurance in Texas?
Should I buy insurance for my classic off-road car?
If you own an expensive classic car, we definitely recommend getting an insurance policy, as it is a valuable belonging. As an owner, you will breathe a sigh of relief while taking care of your favorite automobile over the years.
What to do to cut the cost of classic auto insurance in Texas?
Compare rates from as many brokers as you can, and choose the one with a reasonable cost but good value.
Whether the vehicle's value is low or high, buy a policy based on an agreed valuation.
Even better, try to join a classic car club or society. That way, you will receive timely offers of vehicle insurance at a lower cost.
Jiyo Insurance - Get cheap and quick quotes for classic auto insurance in Texas
To sum up, Jiyo Insurance is the premier service provider for general auto and classic car insurance in Texas and throughout the US via its digital presence.
We are primarily based in Texas, and you can easily reach us anywhere in the state. Whether you need a quote in Dallas, Fort Worth, or El Paso, a policy in Austin or Arlington, service in San Antonio or Corpus Christi, or an affordable plan of classic auto insurance in Houston, follow us.
You may own muscle cars, exotic cars, Cobra replicas, antique cars, military vehicles, or street rods; of course, we accept all these vehicles for a classic auto insurance policy.
In a word, hurry up and grab the opportunity today to save up to 40% on your classic car insurance.
#classic cars#classic car insurance in usa#classic car insurance in texas#auto insurance quotes#auto insurance in Texas#auto insurance in San Antonio#Auto insurance in Houston#Auto insurance in Dallas#Auto insurance in Austin
0 notes
Text
Critical Review
My work explores the concept of transformation. In the beginning it was my intention to capture the ephemeral, an idea which I had whilst on a walk. Walking had become a kind of therapy for me during the lockdown as I had been confined at home - I have been saying in jest for the past year that I'm on house arrest and my walks are my yard time, but I concede that's an exaggeration, even if it felt like it sometimes. The act of walking was the slice of the day where I could be on autopilot; it allowed me the time to just walk and think. Being in nature, I was observing the plants and flowers and began collecting them. I wanted to preserve my collection, to shift them from ephemeral to permanent objects. I primarily used air-drying clay to achieve this, pressing my flowers into it to create moulds from which I could take a positive cast. I spent a few months perfecting this technique and working to this process, and had eventually stockpiled a collection of botanical tiles in different colours and sizes, but my concept had stagnated somewhat by this point; similarly, my daily walking route, which I had enjoyed, had begun to feel like an obligation. The repetition became tedious and is analogous to how I was feeling during lockdown; autopilot had lost its novelty. I realised that my daily practice was like a production line, where I was manufacturing my art in batches from a mould and repeating the process. I began thinking about the modern world and particularly how technology and mass manufacturing had played a vital role in the worldwide response to the pandemic. I was interested in where the line is drawn between functional design and a work of art and sought to explore this in my project, which saw its own transformation going forward.
The everyday object reimagined as fine art has been a subject of intrigue among artists and art lovers since the early twentieth century. Since the ready-mades of Marcel Duchamp, the bounds of art have been redefined. The subject expanded in the 1960s with the emergence of the Pop Art movement, with artists such as Claes Oldenburg and Andy Warhol creating whimsical replicas of common household items, transforming the functional object into ornamental sculpture. In particular, Warhol's work was a response to consumer culture, which transformed certain household brands into art world icons synonymous with his name. By looking at Dadaism, Surrealism and Pop Art, we can see some of the varying ways in which objects have been used in the past. The object form has been used as a means of expression of self, as a form that can be metamorphosed into things created from imagination, as a technical means of expression, as a social statement on society, and as a means of creating art which questions art itself. (Hanna, 1988)
Today, the everyday object as art remains a pervasive subject in contemporary art. Tokyo-based artist Makiko Azakami is one such artist who transforms everyday objects by using only paper for her lifelike sculptures; 'through careful cutting and meticulous handcrafting, Azakami breathes new life into humdrum objects and creates pieces that are deceptively fragile and extraordinarily detail-oriented' (Richman-Abdou, 2016). Korean artist Do Ho Suh creates lifesize object-replicas of fittings and appliances around his apartment using wire and polyester fabric netting.
The use of these materials transforms the objects from functional to ornamental whilst retaining their defunct detail, reframing the domestic object, and the wider domestic space, as sculpture. 'The transparency of the fabrics…is important conceptually because I'm trying to communicate something of the permeability in the ways in which we construct ourselves' (Suh, 2020). Other contemporary artists use found objects in their work, objects which once served a specific purpose that the artist abandons, choosing to elevate the mundane to the realm of fine art and dissolve the boundaries between “high” and “low” forms of culture (Artnet News, 2017).
In my own practice, I chose to study the everyday object of the lightswitch, an idea suggested to me during a presentation of my work. I had built my installation around a lightswitch on my studio wall, an unconscious choice on my part, perhaps going to show just how on autopilot I was. I was interested in replicating the lightswitches around my home using subversive materials and experimenting with installations. I began by taking clay impressions of the lightswitches around my home from which I could make positive casts. This made me think about automation; I felt that the repetition of taking casts from a mould was similar to a production line, and I was the machine, much like Warhol's production-line process of silkscreen printing. As Bergin writes: 'The Machine is, to the artist, a way of life, representative of a unique field of twentieth-century experience, and all of Warhol's art is striving to express the machine in the machine's own terms' (Bergin, 1967). Perhaps all art has an agenda; is any art made just for the joy of it? Or is it just to fulfil some demand? I began to wonder if all art, except for the earliest cave paintings, was produced purely to be consumed. If Warhol's Brillo boxes represent the collapse of the boundary between artistic creation and mass production (Baum, 2008), then where exactly is the boundary? I came to the conclusion that any commodifiable artwork is a product, and creating art is just another form of production for consumption.
When I had my finished clay tile with a porcelain-effect painted finish, I installed it on my kitchen wall, and at first glance it appears to be a standard lightswitch; however, when examined up close, the viewer's expectation is subverted, as you can see that it is a handmade replica. The functional design of the lightswitch is reimagined through the materiality of natural clay, transforming the object from a functional design into an ornamental replica. I had already made many clay lightswitches, and wanted to explore other subversive materials. Inspired by Rachel Whiteread's resin replicas of doors, I began making coloured resin casts from my existing silicone mould, adding a different coloured resin pigment each time. Displayed stacked on backlit shelves, the work invites the viewer to peruse as though they were in a supermarket, highlighting that art is another form of production for consumption in the modern world. I then began thinking about scale, but this time I wanted to use the intangible material of light itself as my medium. Using my transparent coloured resin tiles and a light projector, I projected onto my studio walls. This opened up a door to working digitally, with media such as video and Photoshop. Working in this way allowed me to explore scale as freely as I liked, without time or space constraints.
I began thinking that digital media is an imitation of the real thing. My projections, for example, are not really lightswitches; they are replicas of lightswitches made from the material of refracted light waves. My video gifs and my photoshopped site-specific work are just information converted to binary numbers and translated into pixels on a screen. In this way, I think digital and electronic media are the ultimate subversive material. Overall, my project experienced a dramatic transformation. In the beginning, lockdown had only just begun and it was a new experience. Fourteen drudgerous months later, I am not the same person I was then. The whole world has had its own dystopian transformation, where we now so heavily rely on technology to survive. We have surrendered authentic experience for a pale imitation of the real thing. Our New Normal is just a replica of the life we left behind, subversive in the way that at a superficial glance all remains the same, but on closer inspection is just a substitution. As I made my tiles I was a machine; so too have we become machine men, just pixels on a screen or voices on the end of a line, a replica of the blood and cells and sinew and breath and acne that made us really human.
Artnet News In Partnership With Cartier, (2017) ‘11 Everyday Objects Transformed Into Extraordinary Works of Art’, artnet.com. Article published May 9, 2017. Available at: https://news.artnet.com/art-world/making-art-from-mundane-materials-900188.
Baum, R (2008) ‘The Mirror of Consumption’, essay published in Andy Warhol by Andy Warhol . Available at: https://www.fitnyc.edu/files/pdfs/Baum_Warhol_Text.pdf. p.29.
Bergin, P (1967) ‘Andy Warhol: The Artist as Machine’, Art Journal XXVI, no.4. Available at: https://www.yumpu.com/en/document/read/8274194/andy-warhol-the-artist-as-machinepdf-american-dan. p.359.
Hanna, A (1988) 'Objects as Subject: Works by Claes Oldenburg, Jasper Johns, and Jim Dine', Colorado State University, Fort Collins, Colorado, Spring 1988. Available at: https://mountainscholar.org/bitstream/handle/10217/179413/STUF_1001_Hanna_Ayn_Objects.pdf?sequence=1
Richman-Abdou, K (2016) ‘Realistic Paper Sculptures of Everyday Objects Transform the Mundane Into Works of Art’ , Mymodernmet.com. Article published October 20, 2016. Available at: https://mymodernmet.com/makiko-akizami-paper-sculptures/?context=featured&scid=social67574196&adbid=794911171832901632&adbpl=tw&adbpr=63786611#.WB3vIQMclAU.pinterest.
Suh, D H (2020) 'How Artist Do Ho Suh Fully Reimagines the Idea of Home', crfashionbook.com. Article published May 22, 2020. Available at: https://www.crfashionbook.com/mens/a32626813/do-ho-suh-fully-artist-interview-home-korea/
0 notes
Text
Application Migration to Cloud: Things you should know

Businesses can get more out of their applications by migrating them to the cloud, improving cost-effectiveness and the ability to scale up. But like any other migration or relocation process, application migration involves taking care of numerous aspects.
Some companies hire dedicated teams to perform the migration process, and some hire experienced consultants to guide their internal teams.
Owing to the pandemic, the clear choice is to migrate applications to the cloud. Even though there are still a few underlying concerns about the platform, the benefits outweigh the disadvantages. According to Forbes, by 2021, 32% of IT budgets would be dedicated to the cloud.
Insights like these make migrating applications to the cloud all the more imperative.
Overview: Application Migration
Application migration involves a series of processes to move software applications from the existing computing environment to a new environment. For instance, you may want to migrate a software application from its data center to new storage, from an on-premise server to a cloud environment, and so on.
As software applications are built for specific operating systems, particular network architectures, or a single cloud environment, moving them can be challenging. Hence, it is crucial to have an application migration strategy to get it right.
Usually, it is easier to migrate software applications from service-based or virtualized architectures instead of those that run on physical hardware.
Determining the correct application migration approach involves considering individual applications and their dependencies, technical requirements, compliance, cost constraints, and enterprise security.
Different applications call for different approaches to the migration process, even within the same technology environment. Since the onset of cloud computing, experts refer to patterns of application migration with names like:
· Rehost: The lift-and-shift strategy is the most common pattern, in which enterprises move an application from an on-premise server to a virtual machine in the cloud without any significant changes. Rehosting an application is usually quicker than other migration strategies and reduces migration costs significantly. However, the downside of this approach is that, without changes, applications do not benefit from native cloud computing capabilities, and the long-term expense of running them in the cloud can be higher.
· Refactor: Also called re-architect, this refers to introducing significant changes to the application to make sure it scales or performs better in the cloud environment. It also involves recoding parts of the application so that it takes advantage of cloud-native functionality, such as restructuring monolithic applications into microservices or modernizing stored data from basic SQL to advanced NoSQL.
· Replatform: Replatforming involves making some tweaks to the application to ensure it benefits from the cloud architecture. For instance, upgrading an application to make it work with the native cloud managed database, containerizing applications, etc.
· Replace: Decommissioning an application often makes sense when it offers limited value, duplicates capabilities that exist elsewhere in the environment, or can be replaced cost-effectively with something new that has more to offer, such as a SaaS platform.
The cloud migration service market was valued at USD 119.13 billion. It is predicted to reach USD 448.34 billion by 2026, with a CAGR of 28.89% forecasted from 2021 to 2026.
Key Elements of Application Migration Strategy
To develop a robust application migration strategy, it is imperative to understand the application portfolio, the specifics of security and compliance requirements, available cloud resources, and the on-premise storage, compute, and network infrastructure.
For a successful cloud migration, you must also clarify the key business drivers motivating it and align the strategy with those drivers. It is also essential to be clear about why you need to migrate to the cloud and to set realistic transition goals.
Application Migration Plan
An application migration plan moves through several stages. It is critical to weigh potential options at each stage, such as factoring in on-premise workloads and potential costs.
Stage#1: Identify & Assess
The initial phase of discovery begins with a comprehensive analysis of the applications in the portfolio. Identify and assess each one as part of the application migration approach. You can then categorize applications based on whether they are critical to the business, whether they have strategic value, and what you ultimately want to achieve from the migration. Strive to recognize the value of each application in terms of the following characteristics:
· How it impacts your business
· How it can fulfill critical customer needs
· What is the importance of data and timeliness
· Size, manageability, and complexity
· Development and maintenance costs
· Increased value due to cloud migration
You may also consider an assessment of your application’s cloud affinity before taking up the migration. During the process, determine which applications are ready to move as they are and which might need significant changes to be cloud-ready.
You may also employ discovery tools to determine application dependency and check the feasibility of workload migration beyond the existing environment.
Stage#2: TCO (Total Cost of Ownership) Assessment
Determining the budget for a cloud migration is challenging and complicated.
You have to compare the “what-if” scenario of keeping the infrastructure and applications on-premise with the scenario of cloud migration. In other words, you have to calculate the cost of purchase, operations, and maintenance for the hardware you would keep on-premise, as well as licensing fees, in both scenarios.
On the cloud side, the provider will charge recurring bills, and there are also migration costs, testing costs, employee training costs, and so on. The cost of maintaining on-premise legacy applications should be considered as well.
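A back-of-the-envelope sketch of such a comparison (the cost categories, figures, and five-year horizon are placeholders invented for illustration, not data from the article):

```python
def total_cost(one_time: dict[str, float], recurring_per_year: dict[str, float], years: int) -> float:
    """Sum one-time costs plus recurring costs over the evaluation horizon."""
    return sum(one_time.values()) + years * sum(recurring_per_year.values())

YEARS = 5
on_prem = total_cost(
    one_time={"hardware_purchase": 120_000},
    recurring_per_year={"maintenance": 15_000, "licenses": 20_000, "operations_staff": 60_000},
    years=YEARS,
)
cloud = total_cost(
    one_time={"migration": 40_000, "testing": 10_000, "employee_training": 8_000},
    recurring_per_year={"provider_bills": 55_000, "licenses": 12_000},
    years=YEARS,
)
print(f"{YEARS}-year TCO  on-premise: ${on_prem:,.0f}   cloud: ${cloud:,.0f}")
```

Whichever side comes out cheaper depends entirely on the real numbers; the point is simply to put both scenarios on the same footing.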
Stage#3: Risk Assessment & Project Duration
At this stage, you have to establish a feasible project timeline, identify potential risks and hurdles, and build in room for them.
Stage#4: Legacy Application Migration to The Cloud
Older applications are more challenging to migrate. They can be problematic and expensive to maintain in the long run, may present security concerns if they have not been patched recently, and may perform poorly in modern computing environments.
Migration Checklist
The application migration approach should assess the viability of each application and prioritize candidates for migration. Consider the three C's (a rough scoring sketch follows the list):
· Complexities
- Where did you develop the application – in-house? If yes, is the developer still an employee of the company?
- Is the documentation of the application available readily?
- When was the application created? How long was it in use?
· Criticality
- How many more workflows or applications in the organization depend on this?
- Do users depend on the application on a daily or weekly basis? If so, how many?
- What is the acceptable downtime before operations are disrupted?
- Is this application used for production, development, testing, or all three?
- Is there any other application that requires uptime/downtime synchronization with the application?
· Compliance
- What are the regulatory requirements to comply with?
Application Migration Testing
Testing is an essential part of the application migration plan. It ensures that no data or capability is lost during the migration. Perform tests during the migration itself to verify the data being moved, confirming that data integrity is maintained and that data lands in the correct location.
Further tests are also necessary after the migration is complete, to benchmark application performance and ensure security controls are in place.
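To make the data-integrity checks concrete, here is a deliberately simplified Python sketch that compares row counts and an order-independent checksum between source and target extracts. The in-memory lists stand in for real database reads; that substitution is an assumption made purely for illustration.

import hashlib

def checksum(rows):
    # hash a sorted, canonical representation so row order does not matter
    digest = hashlib.sha256()
    for row in sorted(map(repr, rows)):
        digest.update(row.encode("utf-8"))
    return digest.hexdigest()

def verify_migration(source_rows, target_rows):
    if len(source_rows) != len(target_rows):
        return False, "row count mismatch"
    if checksum(source_rows) != checksum(target_rows):
        return False, "checksum mismatch"
    return True, "source and target match"

# toy stand-ins for real extracts from the source and target databases
ok, detail = verify_migration([(1, "alice"), (2, "bob")], [(2, "bob"), (1, "alice")])
print(ok, detail)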
Steps of Application Cloud Migration Process
#1: Outline Reasons
Outline your business objectives and take an analysis-based application migration approach before migrating your applications to the cloud.
Do you want reduced costs? Are you trying to gain innovative features? Planning to leverage data in real-time with analytics? Or improved scalability?
Your goals will help you make informed decisions. Build your business case for moving to the cloud. When the migration is aligned with key business objectives, successful outcomes follow.
#2: Involve The Right People
You need skilled people to be a part of your application migration strategy. Build a team with the right mix of people, including business analysts, architects, project managers, infrastructure/application specialists, security specialists, subject matter experts, and vendor management specialists.
#3: Assess Cloud-Readiness of the Organization
Conduct a detailed technical and business analysis of the current environment, infrastructure, and apps. If your organization lacks the skills, you can engage an IT company to provide a cloud-readiness assessment report. It will give you deep insight into the technology in use and how ready it is for the cloud.
Many legacy applications are not optimized for cloud environments. They are usually chatty – they make frequent calls to other services for information and to answer queries.
#4: Choose An Experienced Cloud Vendor to Design the Environment
Choosing the right vendor is a critical decision – Microsoft Azure, Google Cloud, and AWS are among the most popular cloud hosting platforms.
The right platform depends on specific business requirements, application architecture, integration needs, and various other factors.
Your migration team has to decide whether a public/private/hybrid/multi-cloud environment would be the right choice.
#5: Build the Cloud Roadmap
Once you have a clear view of the purpose of the cloud migration, outline the key components of the move. Start with business priorities and expected migration difficulties, then investigate other opportunities, such as an incremental application migration approach.
Keep refining the initially documented reasons for moving an application to the cloud, highlight the key areas, and proceed from there.
A comprehensive migration roadmap is an invaluable resource. Map and schedule the different phases of cloud deployment to make sure they stay on track.
Conclusion
An application migration approach can open new avenues for change and innovation, such as application modernization on the journey to the cloud. Several services are already available to help enterprises strategize, plan, and execute successful application cloud migrations. Even so, it is wise to seek application migration consulting before getting started.
0 notes
Text
Quality Management
According to the Center for Business and Economic Research, each $1 invested in quality generates a $3 increase in profit and $16 in cost reduction. Practitioners often point out that a Quality Management System isn't a machine or an application, but the inherent quality process architecture on which the company rests. The term QMS covers all of the people, stakeholders, processes, and technologies involved in a company's culture of quality, in addition to the key business objectives that make up its goals. Quality is both an outlook and a means of increasing customer satisfaction, decreasing cycle time and costs, and reducing errors and rework using tools such as Root Cause Analysis and Pareto Analysis. These include the procedures, tools and techniques used to make certain that outputs and benefits meet customer requirements.

The first element, quality planning, involves preparing a quality management plan that describes the processes and metrics that will be used. The plan has to be agreed with stakeholders to ensure that their expectations for quality are correctly identified, and its procedures should adapt to the culture, processes and values of the host organisation. The second element, quality assurance, validates the use of processes and standards and ensures staff have the correct understanding, skills and attitudes to fulfil their project roles and duties in a competent manner; quality assurance has to be independent of the project, programme or portfolio to which it applies. The third element, quality control, consists of inspection, measurement and testing. It verifies that deliverables conform to specification, are fit for purpose and meet stakeholder expectations.

Quality control activities determine whether acceptance criteria have, or have not, been met. For this to be effective, specifications must be under strict configuration control. It is possible that, once agreed, a specification may need to be modified, commonly to accommodate change requests or issues while maintaining acceptable time and cost constraints. A P3 maturity model provides a framework against which continuous improvement can be initiated and embedded in the organisation. Projects that are part of a programme may well have much of their quality management plan developed at programme level to make sure that standards are consistent across the programme. This might appear to be an administrative burden on day one of smaller projects, but it pays off in the end. Projects deliver outputs that are subject to different forms of quality control, depending on the technical nature of the work and the codes affecting particular sectors. Examples of inspection activities include crushing samples of concrete used in the foundations of a building, x-raying welds in a ship's hull, and following a test script for a new piece of software. Inspection produces data and tools such as scatter diagrams, control charts, flowcharts and cause-and-effect diagrams, all of which help in understanding the quality of work and how it might be improved. The main contribution to continuous improvement that can be made within the timescale of a project is through lessons learned. Existing lessons learned should be consulted at the beginning of every project, and any relevant lessons used in the preparation of the project documentation.
At the end of each project, the lessons learned should be documented as part of the post-project review and fed back into the knowledge base. The duty of the programme management team is to create a quality management plan that encompasses the diverse contexts and technical requirements contained within the programme. This sets the standards for project-level quality management and also functions as a plan for quality in the benefits realisation parts of the programme. A comprehensive quality management plan at programme level can greatly reduce the effort involved in preparing project-level quality management plans. Quality control of outputs is mainly handled at project level; however, the programme may get involved where an output from one project is an input to another, or where extra review is needed when outputs from two or more projects are delivered together. The programme is responsible for quality management of benefits. This is a complex task, since the acceptance criteria of a benefit may cover subjective as well as measurable factors, but benefits should be defined in measurable terms so that quality control can be applied. The typical scale of programmes means that they have a very useful role to play in continuous improvement. Programme assurance will ensure that projects take lessons learned into account and then capture their own lessons in the knowledge base. The very nature of a portfolio means that it is unlikely to need a portfolio-level quality management plan.
0 notes
Text
Approaches to Internal Revenue Service Tax Debt Relief
"This English word comes from Latin Taxo, ""I estimate"". Taxing consists in imposing a monetary cost upon a person. Not paying is generally culpable by law. They can be identified as indirect taxes (a cost imposed straight on a person and gathered by a greater authority) or indirect taxes (troubled items or solutions as well as eventually paid by consumers, a lot of the moment without them recognizing so). Purposes of taxation The initial unbiased tax ought to accomplish is to drive human growth by offering health and wellness, education and also social security. This purpose is additionally very crucial for a steady, effective economy. A 2nd goal and also a consequence of the very first is to reduce poverty as well as inequality. Typically, individuals making more are proportionally exhausted much more as well.
The development of the individual income tax in the United States has a long - and some would say unsteady - history. The Founding Fathers included specific language in the Constitution regarding the authority of the Federal Government to tax its citizens. Specifically, Article 1, Section 2, Clause 3 states:
Government tax revenue is used to meet both revenue and capital expenditure. Revenue expenditure goes towards running the government and towards collecting the tax itself. Capital expenditure goes towards building infrastructure, capital assets and other kinds of investment that generate long-term returns and benefits for people. It should always be the endeavour of the government to meet its revenue expenditure out of current taxes while building assets at the same time with a long-term view.
Lastly, a business requires a Federal Tax Identification Number, or Employer Identification Number (EIN), so that it can maintain its own identity in the marketplace. Note that a tax ID number cannot be transferred when a business changes hands; if the structure or ownership changes, a new tax ID number is needed for the business. Above all, you have to gather the relevant information to obtain an EIN.
The courts have generally held that direct taxes are limited to taxes on people (variously called capitation, poll tax or head tax) and property. (Penn Mutual Indemnity Co. v. C.I.R., 227 F.2d 16, 19-20 (3d Cir. 1960).) All other taxes are commonly referred to as "indirect taxes," because they tax an event, rather than a person or property per se. (Steward Machine Co. v. Davis, 301 U.S. 548, 581-582 (1937).) What seemed to be a straightforward limitation on the power of the legislature based on the subject of the tax proved inexact and unclear when applied to an income tax, which can arguably be viewed either as a direct or an indirect tax.
youtube
Definitions of tax mitigation, avoidance and evasion: it is difficult to state a precise test as to whether taxpayers have avoided, evaded or merely mitigated their tax obligations. As Baragwanath J said in Miller v CIR; McDougall v CIR: what is legitimate 'mitigation' (meaning avoidance) and what is illegitimate 'avoidance' (meaning evasion) is, in the end, to be decided by the Commissioner, the Taxation Review Authority and ultimately the courts, as a matter of judgment. Note that in the above statement the words are as used in the judgment; the mix-up of terms has been clarified by the words in brackets. Tax mitigation (avoidance by planning): taxpayers are entitled to mitigate their liability to tax and will not be vulnerable to the general anti-avoidance rules in a statute. A description of tax mitigation was given by Lord Templeman in CIR v Challenge Corporation Ltd: income tax is mitigated by a taxpayer who reduces his income or incurs expenditure in circumstances which reduce his assessable income or entitle him to a reduction in his tax liability.
Over the years there have been countless examples of such tax arbitrage exploiting elements of the rules at the time. Examples include finance leasing, non-recourse lending, tax-haven 'investments' (a tax haven being a country or designated area with low or no taxes, or highly secretive banks, and often a warm climate and sandy beaches, which make it attractive to foreigners bent on tax avoidance or evasion) and redeemable preference shares. Low-tax policies pursued by some countries in the hope of attracting international businesses and capital are called tax competition, which can provide fertile ground for arbitrage. Economists usually favour competition in any form. But some argue that tax competition is often a beggar-thy-neighbour policy, which can reduce another country's tax base, or force it to change its mix of taxes, or stop it taxing in the way it would like.
Tax collection is done by an agency specifically designated to perform this function. In the USA, it is the Internal Revenue Service that performs this function. There are penalties for failing to comply with the rules and regulations set by governing authorities relating to taxes. Penalties may be imposed if a taxpayer fails to pay his taxes in full. They may be civil in nature, such as a fine or forfeiture, or criminal in nature, such as incarceration, and may be imposed on an individual or on an entity that fails to pay its taxes in full.

Financial institutions were the first to impose service tax on their clients. From their inception, they have routinely expressed service charges in the form of processing fees. The duty of collecting the levy rests with the Central Board of Excise and Customs (CBEC), an authority under the Ministry of Finance. This authority designs the service tax system in India.
0 notes
Text
How To Save Money with Fortnite Free V-Bucks?
What Is Fortnite, and What Is the Digital Currency V-Bucks?
Fortnite is a co-op sandbox survival video game developed by People Can Fly with Epic Games. A newly discovered issue was reported by a user on the Epic Games forums - it seems a number of Fortnite game files were actually damaged or corrupted. According to Marksman, selling Fortnite codes is a safer choice than selling broken-into accounts, although the accounts might be more lucrative (one seller I spoke with was selling an account with a few skins for $900). Players can recover stolen accounts by contacting Epic Games' support and updating their information. The rules around this are immaterial.
The Fortnite World Cup Online tournaments start on April 13 and will run every week for twenty weeks across every region. The semi-final session takes place on the Saturday and is three hours long, with a 10-match limit. If you place in the top 3,000 players, you'll be able to take part in the finals that weekend. The Fortnite Week 1 Challenges of the Season 8 Battle Pass are live now, including visiting all Pirate Camps and the giant face locations. Clear at least some of the several challenges to gain 5,000 XP. This set was released on Feb 28, 2019.
These virtual coins can be purchased on the official Fortnite store as well as from vendors including Microsoft and GAME. Still, with 1,000 coins costing roughly $10, there is a market for discounted coins, which are eagerly snapped up by players. Now, this raises an indirect question: because Fortnite is only a single kind of game, could most games really adopt Fortnite's model? I think they can. That clearly applies to online games, particularly titles like Call of Duty and FIFA.
Fortnite is hard to enjoy without V-Bucks; vbucks.codes presented me with a countless total of V-Bucks. While there was a copyright battle against the game, it now appears Fortnite is safe, for now. Bloomberg reports that PUBG Corp. delivered a letter of withdrawal to Epic Games' lawyers on Saturday, and that the case is now settled. While it remains unclear just how much money criminals have managed to make through Fortnite, over $250,000 worth of Fortnite items were sold on eBay in a two-month period last year. Research from Sixgill also shows an increase in the number of mentions of the game on the dark web, closely tracking the game's revenue.
This Swagbucks link will let you get a free $3 worth of points when you earn only $3 worth. It will allow you to get at least some of the nice stuff in Fortnite. Squad up with your friends and get an Xbox One X 1TB console, an Xbox wireless controller, Fortnite Battle Royale, the Legendary Eon cosmetic set, and 2,000 V-Bucks. How to get free V-Bucks in Fortnite? This is the question being asked by many Fortnite players. The main reason is that, with V-Bucks, it is possible to readily access most of the items in the Fortnite game.
Part of Fortnite's rise to dominance has no doubt been its cross-platform availability, with casual mobile gamers on the move able to play against dedicated home gamers on equal footing (Sony was hesitant, but cross-play functionality has lately been enabled for PS4 players). Most of the FORTNITE V BUCKS GENERATOR websites out there try to convince people that their developers somehow managed to hack into the Fortnite database, and so can easily grant unlimited free V-Bucks in the Battle Royale game.
Figure 1: Data demonstrates the estimated revenue of Fortnite compared to PlayerUnknown's Battlegrounds between August 2017 — June 2018, based on the Edison Trends dataset. While you can't directly gift V-Bucks to another person, you have a several options to help them get their Fortnite fix: purchase them a gift license for the software of choice, or buy a bundle with limited information.
VBUCKS - Relax, It's Play Time
Once the recording starts, you'll see a preview of your game in the upper-right corner (which you can minimise if you want). This preview window lets you quickly toggle the microphone and webcam on and off, and you can also click on the "chat" link to see exactly what everyone is saying about your terrible Fortnite death streak. If you're wondering what items are in the Fortnite shop today, on whatever day you're reading this, you can head over to a Fortnite Battle Royale shop items guide, which is updated every day to show all the new things Epic adds to the Fortnite store.
If you want help with redeeming Fortnite currency on Xbox Live, PlayStation, or the Microsoft marketplace, see the blog. If you are struggling to download the Fortnite game, write to us right now to get help with the generator. V-Bucks are the most popular in-game purchase for Fortnite players, making up 83% of items bought and 88% of spending. While different V-Bucks amounts are available for purchase, the single most popular Fortnite item is 1,000 V-Bucks for $9.99, accounting for 53% of items purchased and 33% of spending.
Thanks for watching the Fortnite: Battle Royale & Fortnite: Save The World videos! Want more? I post daily Fortnite videos and anything interesting for Fortnite Battle Royale. Fortnite's original and much less popular horde mode offers daily login bonuses, daily challenges, and rewards for Storm Shield Defense missions. These are quick and easy ways to get a small amount of currency each day, although you'll have to actually buy the mode.
Being the best at Fortnite Battle Royale is no simple accomplishment, and there's no reliable way to achieve a Victory Royale every time. Yet, while we can't guarantee you'll finish in the top five whenever you play, these pointers for playing Fortnite Battle Royale should help you out-survive your peers more often than not. It is important to learn where chests are placed in Fortnite Battle Royale; that gives you an immense advantage. You can memorise every individual spot, but the easier approach is to discover them on a map.
All the Fortnite Battle Royale tips you need, plus Fortnite Android info, free V-Bucks, and Fortnite server status updates. Fortnite is the best battle royale video game of 2018. PUBG is a similar game, although its quality is much lower than Fortnite's. The essential question is whether the dances used in Fortnite emotes are copyrightable material protected under US law. If not, then Epic Games' use of the dances is not copyright infringement, and in-game purchases of those particular emotes can continue unfettered.
The Secrets To VBUCKS
More simply: playing "Fortnite" is free, but progressing through the game's loot-unlock system is not. When you log in to the Fortnite game (free V-Bucks, no human verification), you will be presented with rotating missions in the daily quest system. Once you have finished each one, you'll receive free V-Bucks that you can spend on things in the Battle Royale mode.
So these were some of the best ways to obtain free V-Bucks in the Fortnite game without spending any real money. Fortnite Battle Royale is an online multiplayer survival shooter developed by Epic Games, in which 100 players fight to be the last one standing, with matches lasting under thirty minutes. It is a free game that uses some mechanics from the original Fortnite, a survival sandbox game, and many aspects of the "battle royale" style of matches.
Your account security is our top priority! Protect your account by enabling 2FA. As a reward for securing the account, you'll unlock the Boogiedown emote in Fortnite Battle Royale. It's unclear at this time how the Buried Treasure item will work, however. It would make sense if the item somehow revealed a powerful tool hidden somewhere around the map - but most Fortnite fans don't know what to expect quite yet.
So that's everything you need to know about getting started in Fortnite and Fortnite Battle Royale. We'll see you in the channels. In reply, Epic team member "darkveil" said, "Yes! The plan is in place but it is technically a little difficult," according to the comment. Epic wants to offer "mini-BR" matches in Creative where users can configure the storm, use the bus drop feature, and other events from the primary Fortnite battle royale island.
To get Fortnite on a PC or Mac, you'll need an Epic Games account. Unfortunately, there doesn't appear to be a simple way to stop children from buying items on the account once they know the password, though we've asked Epic Games and will update this article when we hear back. The best instant sound options are in the best soundboard for Fortnite. Use it in the lobby, in a match, or after death! Create your best moments while playing online.
"Flamemaster", a tenth grader, says they are "annoying, obnoxious, toxic, and infuriating." So what is wrong? Of course, every game has its drawbacks, and I am not trying to demonstrate that Fortnite is a dangerous game, only to show how many of the people who play it have ruined what might have been a decent game. You will receive an e-mail alert if the price of Fortnite - 10,000 (+3,500 Bonus) V-Bucks drops.
Fortnite Battle Royale is now available in the Android and iOS stores free of charge. This great game was built and published by the Epic Games company. The requirements to run this game are high; a flagship device is recommended, as the game has to load the whole world map. Through us you are able to get free Fortnite V-Bucks without completing any annoying surveys or getting yourself banned from the game. Some Fortnite hacks include illicit bots that can be dangerous. Instead, on our website we offer you the opportunity to get V-Bucks without completing surveys, using illicit bots, or other illicit means.

youtube
You can turn off hardware acceleration in Google Chrome so that background applications running in Chrome do not consume too many resources while you are streaming Fortnite. Free Fortnite V-Bucks generator: the finest and most straightforward way for 2019. If you're not convinced, then pay attention to this section and read it very carefully, because those fake V-Bucks hacks can get your Fortnite account banned or blocked if you fall into the trap.
Having moving storm circles in Creative would be warmly received by the Fortnite competitive community, as players would be able to consistently practice the hectic end-game scenarios commonly created by LAN events. Fortnite could also follow PUBG's example and add new maps to freshen up the gameplay (but made with Fortnite's signature goofy style). More vehicles could be another interesting way to go, with competitors PUBG, H1Z1 and Call of Duty: Blackout successfully featuring vehicular gameplay.
0 notes
Video
youtube
I watched The Hard Parts of Open Source by Evan Czaplicki not too long ago and my socks were blown right to the moon! I had never heard of Czaplicki before (although I have met a couple of Elm enthusiasts) and I'm really wishing at this point that I'd come across him sooner.
Czaplicki's engaging, humble, and winsome throughout. I feel like it's somewhat rare for me to come across a talk like this and by the end of it really feel like I've found someone who could legitimately be a role model for me. He wants to solve hard problems in tech but he doesn't want to beat people over the head. He thinks he has a good way to solve something and he's OK if you think you have something better. And more than that he wants to engage with you over your ideas kindly and generously, understanding that you likely came to your conclusions because you're solving for different constraints than him, not because you're a cotton-headed ninny muggins!
The talk itself is a sweeping engagement with techno-social realities that I think is relevant far outside of the purported subject area ("Open Source Communities And The Challenges They Face"). We live in extraordinarily polarized times. Online communities dial all our instincts up to 11 and let them loose upon anonymized but still oh-so-human victims. We need something that will help course-correct the way we deal with each other, and quite frankly the problem isn't largely how we deal with each other in person but how we deal with each other online, whether in an Open Source Community or Facebook Group.
The thesis of the talk, not to bury the lede, is exactly that. Online spaces are "viral" by design. They seek to constantly evoke our most extreme feelings so that we can't help but feel the need to return to them all the time. The barrier to re-entry sits in most of our pockets, and those little moments of boredom can be translated into delivering the perfect rebuttal to that idiotic curmudgeon DenverCoder9. This viral design employs the most sophisticated behavioral technology on the planet to ensure our engagement, and the parties responsible for designing it have a deep distrust of regulatory forces of any kind. Simply put, there isn't an effective countervailing force out there to fight this. So Czaplicki suggests that, at least in the spaces we run ourselves, we might employ some of the same behavioral technologies to alter the way we engage with each other, and hopefully find some way to counter the use of these technologies for nefarious purposes (like rigging elections).
An Outline Of The Talk
The talk as a whole is structured like so:
Open Source communities are emotionally taxing, especially for the creators of the thing being gathered around, who are often left running the community as a secondary duty. Czaplicki believes this is because of certain normative behaviors that exist in Open Source communities (or online communities generally).
Czaplicki has noticed that this problem seems to be almost unique to online spaces and believes that it can be traced back to two fundamental emphases in cyber culture:
Absolute freedom is an unambiguous good. Any attempt at control must be met with at least suspicion if not outright hostility.
Engagement has arisen as the most obvious profit center for businesses, and incredibly effective behavior technology has emerged to make engagement something possible to manipulate.
These two points are intertwined. Czaplicki believes that Engagement is unambiguously controversial but that the primary tools we have for changing technical realities all emphasize Freedom over all else and so we don't have an effective toolset with which to fight Engagement.
The thesis of the talk, then, is that online spaces are viral by design, constantly ratcheting all human responses up to the max. They're designed this way because doing so makes a profit (as MLK said: "Every condition exists because someone profits by its existence.") and making a profit is the natural end of unrestrained Freedom.
But the very tools being used to control us (Behavioral Technology as elucidated by Nudge) could be used to construct online spaces that are as pleasant to be a part of as can be expected of diverse communal spaces.
He calls this Intentional Communication and takes the time to outline quite a few pragmatic suggestions for the design of online spaces that he thinks would help to Nudge interactions in the right direction.
Some More Detail About Specific Points
Open Source Community/Online Spaces Anti-Patterns
In terms of what makes Open Source Communities difficult, he outlines some painfully familiar anti-patterns.
"Why don't you just…"
This is the all too familiar chime in from the peanut gallery, generally by someone fresh to the project, who just can't understand why this behavior or that design decision was made the way it was. They're generally convinced that in the 5 minutes they've been here they've seen the obvious solution and can't understand why it wasn't done 4 1/2 minutes ago.
Part of the problem this creates for maintainers is that in reality every one of these comments needs a careful, measured, and friendly response, or you'll immediately be labeled as a jerk. Documentation can help on this point, but there's only so much you can write to anticipate every new complaint. On even moderately successful projects, the rate at which these suggestions arrive is also likely to overwhelm the paltry volunteer force you have built up around you, especially if you didn't take the time to painstakingly document every single design decision you made or how you've prioritized your time.
"On whose authority?"
The title of this section comes directly from a post written to the Clojure community from someone who was "Done with Clojure", at least in part because it's seen as a 'closed' language with almost all control over its development and direction directly in the hands of its original creator, Rich Hickey. The anti-pattern is trying to capture the notion that "authority" is generally viewed as suspicious and in most cases probably inhibiting. Individual empowerment, to whatever ends the individual desires, is the ultimate goal of technology in this worldview.
"All discussion is constructive"
In other words, flat is better and tone doesn't matter. This attitude is pervasive in online spaces (and in more technical in-person spaces as well). It is the idea that it's the responsibility of the listener to interpret the message and respond based on its pure logical content, rather than the responsibility of the speaker to be careful about how they word things and to present the information in a way that's sensitive to all of the concerns present in the moment, whether factual or sentimental.
More than that it's about the idea that everyone deserves a seat at the table all the time and that they can express themselves however they see fit, and that it's our responsibility to hammer at our opponents until they cry 'yield' at us and admit that we're right (or we do so instead).
The quote I loved here is:
Constructive discussion is about mutual understanding, rather than mutual agreement.
I want that to be part of my life all the time. Recognizing that discussion is first and always about mutual understanding and only potentially about agreement is powerful to me.
Who fears regulation and why do they fear it?
Czaplicki traces a really interesting thread with the help of a documentary called All Watched Over by Machines of Loving Grace and a book called From Counterculture to Cyberculture to try to explain why online spaces in particular have been so rife with these sorts of anti-patterns. The idea is that, in both a New Age sense and a technical sense, we have become as gods in our power to manipulate the environment and each other to accomplish the ends we wish to. Old forms of power like Governments, Religions, and Societies have failed to produce the utopia we wish to live in, but we now, through the technology, have the power to create that utopia ourselves, and will do so digitally. But in order to do so, the one thing that cannot be violated is our Freedom. Freedom is the right by which we may use technology to shape our future in the way we see fit. Any controls placed upon us (especially controls by the failed hierarchical structures) will inhibit that and thus must be resisted.
This emphasis on Freedom is evident in every major online space I can think of. Community controls are just now starting to be in vogue, but they're still seen largely as impediments by many and as inadequate by others. And still the companies putting them in place seem reluctant for the most part. This is in part because of the emphasis on Engagement as a profit center for companies, but also because of the participants' belief that we can shape our future only when we have total control.
Let's talk, then, about Intentional Communication and how it could be a tool to make more effective online spaces.
What Is Intentional Communication?
The idea of intentional communication is essentially the realization that the same tools that have been used to increase engagement for the purposes of selling things could instead be used to encourage us to communicate in more productive ways.
For instance, in open source communities, online conversations could open with a declaration of intent that suggests to the person what kinds of communication are appropriate here. Are you here to learn? If you are, what's your background? How long have you been using Elm? What other languages do you know? Once you've answered that then you can ask your question, and since you've provided a good deal of context a question that could be easily misinterpreted without it can now be understood.
Then, when the question is answered, the responder can likewise be guided. They can be encouraged to restate the question, give their answer, and provide citations. They can be encouraged to thank the questioner as well. This can go in a cycle until both indicate that they're satisfied.
I love the idea that in this cycle the concept of 'Yelling angrily' isn't reachable.
You also don't have to forbid free self-expression either. You just need to create a context for that.
This idea can then be extended to other contexts. You can apply deescalation nudges like encouraging people to not respond too rapidly. You can apply writing style nudges like checking for wordiness. You can protect against communities being dominated by a few individuals by throttling posts by the same person. You can allow people to react to contributions in more productive ways by giving them more options regarding how to react than just a thumbs up or thumbs down.
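To make one of these nudges concrete, here is a minimal Python sketch of a per-user posting throttle of the sort described above. The five-minute gap and the wording of the message are my own invented placeholders, not anything specified in the talk.

import time

class PostThrottle:
    def __init__(self, min_seconds_between_posts=300):
        self.min_gap = min_seconds_between_posts
        self.last_post = {}  # user id -> timestamp of their last post

    def may_post(self, user_id, now=None):
        now = time.time() if now is None else now
        last = self.last_post.get(user_id)
        if last is not None and now - last < self.min_gap:
            wait = int(self.min_gap - (now - last))
            return False, f"Take a breath -- you can reply again in {wait} seconds."
        self.last_post[user_id] = now
        return True, "Posted."

throttle = PostThrottle()
print(throttle.may_post("denvercoder9", now=0))   # allowed
print(throttle.may_post("denvercoder9", now=60))  # nudged to slow down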
My Takeaways
As I said in the beginning, this talk really blew me away. I got every book from the reading list from my library and devoured them. I want to do something to further the development of Intentional Communication. I think our ability to communicate with each other is a major crisis in our times. The Internet has made this worse, not better. Unbounded economic growth targets have united with unprecedented behavioral technology to produce a society that's constantly simmering just below the boiling point. I don't know where I'll go with this but at the very least I think more people need to engage with this.
0 notes
Text
Practical Problems Machine Learning and Artificial Intelligence are Still Facing
The way Machine Learning and Artificial Intelligence have streamlined modern life has been a source of fascination, and AI and ML have undeniably changed the way industries operate. Yet, despite their deep impact on our lifestyles, Artificial Intelligence still has a long way to go.
In fact, several practical concerns related to Machine Learning and Artificial Intelligence are yet to be solved. ML still fails at certain tasks, a talent deficit persists in the background, and data has never been free from quality and bias issues.
In this article, you will come across some of the practical issues related to Artificial Intelligence.
Reasoning ability
One of the constraints of ML algorithms is their lack of reasoning ability beyond the application they are intended for. Reasoning is a powerful human trait, whereas AI algorithms are developed for specific purposes.
When it comes to applicability, the available algorithms have to be narrowly scoped. As a result, they cannot reason about why a given method is being used, nor introspect their own outcomes.
Take the example of an algorithm used in image recognition. In a particular scenario, it can differentiate oranges from apples, but it is unable to determine whether a particular fruit is in good condition. A human can explain and reason about such a judgment; the ML model cannot.
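A tiny, contrived sketch makes the point. The classifier below (assuming scikit-learn is available, with made-up feature values) can answer the apple-versus-orange question it was trained on, but it has no representation of freshness at all and cannot explain its own decision.

from sklearn.neighbors import KNeighborsClassifier

# features: [weight in grams, redness on a 0-1 scale] -- invented numbers
X = [[150, 0.9], [170, 0.8], [140, 0.2], [160, 0.1]]
y = ["apple", "apple", "orange", "orange"]

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[155, 0.85]]))  # -> ['apple']
# Nothing in the model encodes freshness, context, or a justification for its choice.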
Scalability
Although AI applications are used widely, one needs to consider the scalability factor. Data grows very quickly and comes in several forms, which makes it difficult to scale a project based on machine learning.
Algorithms alone cannot do much in this case. They need to be updated on a regular basis as the data changes, so regular human intervention is required to scale the project. ML and AI are not yet capable of resolving this problem on their own.
Besides, when an increasing volume of data is shared on a platform backed by ML, it needs to be examined with intuition and domain knowledge. Machine Learning lags behind human intelligence when it comes to these qualities.
Contextual limitation
Natural language processing (NLP) is one of the crucial mechanisms used in Machine Learning. In NLP, speech and text are the key means of interpreting language. AI can learn words, letters, syntax and sentences.
However, it is unable to process the context of language. The reason is that it is not possible for algorithms to understand the context in which the language is being used. In some cases, human intervention becomes inevitable, which is why many processes have not yet been fully automated.
Even where AI has natural language processing abilities, it remains restricted to certain areas. AI fails to develop a comprehensive idea of the situation; its literal, pattern-based interpretations limit its scope, and it does not recognise what is happening around it.
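A small illustration of this contextual blindness, assuming a recent scikit-learn is available: a plain bag-of-words representation gives the word "bank" the same feature whether the sentence is about a river or about money, so the model has no way to recover the intended sense.

from sklearn.feature_extraction.text import CountVectorizer

sentences = ["she sat on the river bank", "she opened a bank account"]
vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(sentences)

vocab = vectorizer.get_feature_names_out()
print(dict(zip(vocab, bow.toarray()[0])))
print(dict(zip(vocab, bow.toarray()[1])))
# The single "bank" column carries no information about which meaning is intended.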
Data subjected to regulatory restriction
Massive volumes of data are needed when working with ML, particularly in stages like training and cross-validation. However, this data often contains both general and private information, which creates complications and subjects the data to regulatory restrictions.
Most tech firms keep their data private. ML applications find this data useful, but the process involves risks, as the data could be used for the wrong purposes. The concern is greater when it is used in health insurance, medical research or other sensitive fields.
At times, data is anonymised. However, it is never free from vulnerability. For this reason, data becomes subject to regulatory restrictions, which may prevent ML from being used to its full potential.
Internal mechanism of deep learning
This particular branch of Machine Learning deserves much of the credit for today's AI growth. Previously, it was merely a theory.
Now, however, it is one of the most significant aspects of ML. Deep learning (DL) powers image recognition, voice recognition and other applications through artificially developed neural networks.
However, the internal workings of deep learning remain poorly understood. Researchers are still puzzled by how advanced DL algorithms work: deep neural networks consist of millions of neurons, with increasing abstraction at every level that no one can fully comprehend. It is for this reason that researchers have dubbed deep learning a ‘black box’. Uncovering its internal mechanism might open up new avenues for its application.
The training process is difficult
Traditional enterprise software is straightforward: the business goals are specific and the functionality is well-defined, so the right technology can be chosen to develop these tools. Developing a working version of this kind of software is a matter of a few months.
However, several layers of work are involved in Machine Learning. Engineers need to build a program, and it takes time for that program to learn the behaviour it is being trained for. If a few more layers need to be added, the overall process becomes more complicated.
Evidently, it takes more time to develop ML applications, particularly because researchers need to train the respective algorithms. Uncertainty often looms over the amount of time required to develop ML programs.
Data sets of different sizes need to be evaluated, which means that data scientists and Machine Learning engineers cannot guarantee that the behaviour of the training model will be replicated in the production application.
Conclusion
Although Artificial Intelligence and Machine Learning are not free from flaws, a time is likely to arrive when technology becomes intelligent enough to tackle these issues. Presently, research on intelligent systems is still going on.
Most forward-thinking firms are partnering with AI-based app developers to bolster their applications and websites. Patient and careful planning can prevent the risks associated with the process and generate high rewards.
Perhaps more groundwork needs to be done on ML, particularly in understanding deep learning, rather than simply scaling it up at this moment. The challenges need to be addressed carefully so that they no longer remain a concern for the business organisations using these technologies.
Aarsh, Co-Founder & COO, Gravitas AI
www.gravitas.bot
0 notes
Text
How can notions of "interaction aesthetics" be significant for interaction design practice?
Introduction
The purpose of this essay is to explore the concept of interaction aesthetics within the context of interaction design. This essay further argues for the value of establishing a common vocabulary for interaction aesthetics while simultaneously highlighting potential issues that might arise from such a system. To support my argument, I draw upon real-world examples and Lenz, Diefenbach & Hassenzahl’s writings on this subject.
The aesthetics of interaction
Up until quite recently, the human experience of interacting with a machine or piece of software was shaped mainly by technical constraints and necessities; the possibility space offered to designers was delimited by hard factors such as functionality, cost, size, weight and so forth. However, in recent years the emergence of a vast array of affordable and miniaturized new technologies has to a significant degree untethered the possibilities of interaction from these constraints, resulting in a high degree of freedom in actually designing the aesthetics of an interactive object. It is now possible to create interactions that are not merely functional, but also beautiful and emotionally satisfying, in much the same way as has long been possible with user interface design. This development in turn makes interaction aesthetics an emergent field worthy of study and discussion. (Lenz, Diefenbach & Hassenzahl, 2014)
Lenz, Diefenbach & Hassenzahl (2013) describe how current attempts to discuss interaction aesthetics tend to focus on specific aspects of an interaction without providing a holistic view. In order to remedy this situation, they propose a kind of standardized vocabulary of interaction aesthetics which describes different types of interactions using different attributes that scale between two extremes, for example slow to fast or direct to mediated. They further categorize these attributes into why, what and how-levels. The why-level focuses on the subjective emotional experience created by the interaction, the what-level describes the actual purpose of the interaction, and the how-level deals with how the interaction is designed.
Two real-world examples
Direct versus mediated
The first prototype we created during the course consisted of a virtual humanoid stick figure that a user could move across the screen using one joystick to control each leg. The purpose of the joystick walker was to explore how a user might experience having direct control of a virtual character's limbs, that is to say the direct opposite of the heavily mediated type of movement controls found in many video games. As opposed to simply abstracting movement control into a directional input, this prototype allowed for discrete control of each leg using two separate joysticks. This prototype could be said to invert the conventional “how” of controlling a virtual character in order to explore how this affects the “why”.
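To make the directness of this mapping concrete, the sketch below shows the kind of per-frame joystick-to-leg mapping the prototype relied on, written here as a small Python illustration; the axis range and swing limit are invented stand-ins rather than the values from our actual build.

def leg_angle(stick_y, max_swing_degrees=45.0):
    # map a joystick axis in [-1.0, 1.0] straight onto a leg swing angle
    stick_y = max(-1.0, min(1.0, stick_y))
    return stick_y * max_swing_degrees

left_stick, right_stick = 0.6, -0.3  # raw axis readings sampled each frame
print(leg_angle(left_stick), leg_angle(right_stick))  # 27.0 -13.5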
According to Lenz et al. (2013) mediated interaction creates a sense of remove from the object of interaction, as if the user is merely triggering some action rather than directly creating and controlling it. Direct interaction on the other hand creates a “close relationship between the human and the thing being manipulated.” (ibid, 131)
This prototype demonstrates the value of having access to a vocabulary that both allows a designer to accurately define the nature of an interaction and grants access to its antonym and the wide gradient of possible modes of interaction in between both extremes. This is especially true when it comes to an interaction such as virtual character movement control, where the norm is entrenched to such a degree that it becomes difficult to imagine any other kind of interaction. Simply having access to a pre-defined opposite encourages a designer to expand the scope of their inquiry.
However, this example also raises the issue of the relativity of language and the great extent to which words are interpreted differently based on a person’s individual experience. While our prototype was arguably very direct compared to a traditional movement control interaction, where a user pushes a single stick to make a character move in a direction, and we discussed the design as if we were moving from one extreme to the other, it is in hindsight possible to imagine interactions that sit even further out on both sides of the axis. Thus, the scope of the gradient between the two extremes expands and contracts based on context and the experience of the people involved in a design process.
Instant versus delayed
The text-based compass was a phone-based prototype created for navigating around places of interest in an urban environment while facilitating spontaneous discovery: it only shows the direction towards a location, as opposed to traditional map applications that give the user precise directions to their destination. The compass rotates in concert with the yaw of the user’s phone, just like a traditional magnet-based compass would as one turns it. It further tracks the pitch of the user’s movements and tilts the on-screen content accordingly, although the navigation system does not actually take the relative elevation of destinations into account; the tilting is purely aesthetic. The rotation on both axes does not precisely track the movement of the user’s phone; it was intentionally made to interpolate gradually towards the phone's current rotation. The interaction could thus be said to be both fluent and delayed, but it was the delay that was our main focus.
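The delay came from simple interpolation: instead of snapping to the phone's current yaw, the needle closes a fixed fraction of the remaining difference each frame. The Python sketch below illustrates the idea; the smoothing factor is an arbitrary stand-in rather than the value used in the actual prototype.

def step_needle(needle_yaw, phone_yaw, smoothing=0.1):
    # take the shortest way around the circle, then move a fraction of it
    delta = (phone_yaw - needle_yaw + 180.0) % 360.0 - 180.0
    return (needle_yaw + smoothing * delta) % 360.0

yaw = 0.0
for frame in range(5):
    yaw = step_needle(yaw, phone_yaw=90.0)
    print(round(yaw, 1))  # 9.0, 17.1, 24.4, 31.0, 36.9 -- the lag users read as sluggish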
Instant interaction creates a feeling of physical connection and oneness with the object being interacted with, whereas delayed interaction promotes an awareness of what is happening during the interaction and imbues it with a sense of greater importance, that the interaction itself is something worthy of paying attention to rather than just the result. (Lenz et al., 2013)
As our prototype was intended to promote a sense of slow-paced casual discovery, we attempted to design it to create just the kind of feeling that Lenz et al. (2013) ascribe to a delayed interaction. However, in practice this approach rather ended up creating a feeling of sluggishness and lack of precision for the user; the delayed reaction of the compass gave the impression that it was struggling to lock in on the correct direction. This example demonstrates how it is key to view these terms in the wider context of already existing similar
interactive objects: in this case, our prototype emphasized attributes that conventional navigation tools try their hardest to minimize, and thus created a sense of performing poorly compared to what a user might be accustomed to.
Conclusion
The above-mentioned examples clearly illustrate the value for designers of establishing a common vocabulary for describing the aesthetics of interactions, especially, as was frequently the case during the course, when intentionally attempting to build knowledge by subverting and working against established norms for common types of digital interaction.
Besides the obvious advantage of engendering a more precise discussion during the design process, adhering to an established vocabulary of interaction also provides a designer with a toolkit of precise terms that allow one to define discrete attributes that can describe both practical and experiential aspects of an interaction and place them on a scale between two extremes, thereby facilitating experimentation and a wider scope of designerly inquiry.
However, any such usage of these terms would still unavoidably be highly contextual and must be interpreted in relation to similar phenomena and the individual experience of both designers and users. For example, while the transitional animations featured in iOS generally last less than half a second and could thus be described as fairly fast they might still be perceived as slow by a user who is accustomed to the even more rapid animations found on Android devices. Therefore, it might be valuable to expand on the vocabulary of interaction aesthetics by attempting to clearly delineate between which terms are absolute and which are relative, e.g. temporally fast versus merely feeling fast. Despite the existence of a common vocabulary for interaction aesthetics, designers must continually make sure that everyone involved has a shared understanding of the terms involved. Lenz et al. (2014) touch upon this problem when they caution against using terminology that does not build upon well-established definitions.
This objection does by no means render the concept of a vocabulary for interaction aesthetics useless, but it is an unavoidable problem that must always be taken into account when making use of such a system. Words are inherently imprecise and highly contextual, but a commonly agreed upon set of terms would still be a significant improvement over a situation where definitions are different between individual designers or at the most agreed upon among a small group.
References
Lenz, E., Diefenbach, S., & Hassenzahl, M. (2014). Aesthetics of interaction. Proceedings of the 8th Nordic Conference on Human-Computer Interaction Fun, Fast, Foundational - NordiCHI 14. doi:10.1145/2639189.2639198
Lenz, E., Diefenbach, S., & Hassenzahl, M. (2013). Exploring relationships between interaction attributes and experience. Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces - DPPI 13. doi: 10.1145/2513506.2513520
0 notes
Text
Original Post from FireEye Author: Philip Tully
Reverse engineers, forensic investigators, and incident responders have an arsenal of tools at their disposal to dissect malicious software binaries. When performing malware analysis, they successively apply these tools in order to gradually gather clues about a binary’s function, design detection methods, and ascertain how to contain its damage. One of the most useful initial steps is to inspect its printable characters via the Strings program. A binary will often contain strings if it performs operations like printing an error message, connecting to a URL, creating a registry key, or copying a file to a specific location – each of which provide crucial hints that can help drive future analysis.
Manually filtering out these relevant strings can be time consuming and error prone, especially considering that:
Relevant strings occur disproportionately less often than irrelevant strings.
Larger binaries can output upwards of tens of thousands of individual strings.
The definition of “relevant” can vary significantly across individual human analysts.
Investigators would never want to miss an important clue that could have reduced their time spent performing the malware analysis, or even worse, led them to draw incomplete or incorrect conclusions. In this blog post, we will demonstrate how the FireEye Data Science (FDS) and FireEye Labs Reverse Engineering (FLARE) teams recently collaborated to streamline this analyst pain point using machine learning.
Highlights
Running the Strings program on a piece of malware inevitably produces noisy strings mixed in with important ones, which can only be uncovered after sifting and scrolling through the entirety of its messy output. FireEye’s new machine learning model that automatically ranks strings based on their relevance for malware analysis speeds up this process at scale.
Knowing which individual strings are relevant often requires highly experienced analysts. Quality, security-relevant labeled training data can be time consuming and expensive to obtain, but weak supervision that leverages the domain expertise of reverse engineers helps accelerate this bottleneck.
Our proposed learning-to-rank model can efficiently prioritize Strings outputs from individual malware samples. On a dataset of relevant strings from over 7 years of malware reports authored by FireEye reverse engineers, it also performs well based on criteria commonly used to evaluate recommendation and search engines.
Background
Each string returned by the Strings program is represented by sequences of 3 characters or more ending with a null terminator, independent of any surrounding context and file formatting. These loose criteria mean that Strings may identify sequences of characters as strings when they are not human-interpretable. For example, if consecutive bytes 0x31, 0x33, 0x33, 0x37, 0x00 appear within a binary, Strings will interpret this as “1337.” However, those ASCII characters may not actually represent that string per se; they could instead represent a memory address, CPU instructions, or even data utilized by the program. Strings leaves it up to the analyst to filter out such irrelevant strings that appear within its output. For instance, only a handful of the strings listed in Figure 1 that originate from an example malicious binary are relevant from a malware analyst’s point of view.
Figure 1: An example Strings output containing 44 strings for a toy sample with a SHA-256 value of eb84360ca4e33b8bb60df47ab5ce962501ef3420bc7aab90655fd507d2ffcedd.
Ranking strings in terms of descending relevance would make an analyst’s life much easier. They would then only need to focus their attention on the most relevant strings located towards the top of the list, and simply disregard everything below. However, solving the task of automatically ranking strings is not trivial. The space of relevant strings is unstructured and vast, and devising finely tuned rules to robustly account for all the possible variations among them would be a tall order.
Learning to Rank Strings Output
This task can instead be formulated in a machine learning (ML) framework called learning to rank (LTR), which has been historically applied to problems like information retrieval, machine translation, web search, and collaborative filtering. One way to tackle LTR problems is by using Gradient Boosted Decision Trees (GBDTs). GBDTs successively learn individual decision trees that reduce the loss using a gradient descent procedure, and ultimately use a weighted sum of every tree’s prediction as an ensemble. GBDTs with an LTR objective function can learn class probabilities to compute each string’s expected relevance, which can then be used to rank a given Strings output. We provide a high-level overview of how this works in Figure 2.
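The post does not name a specific GBDT implementation, so the following sketch uses LightGBM's LGBMRanker with a lambdarank objective purely to illustrate the train(), predict(), and sort() steps. The feature matrices, labels, and group sizes are synthetic placeholders; each group stands for the set of strings produced by one binary.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)

# train(): synthetic stand-ins for the transformed feature vectors and the
# 0-7 weak-supervision labels. Each entry in group_sizes is the number of
# strings coming from one binary's Strings output.
X_train = rng.normal(size=(1000, 16))
y_train = rng.integers(0, 8, size=1000)
group_sizes = [200, 300, 500]  # three binaries, 1,000 strings in total

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=200, learning_rate=0.05)
ranker.fit(X_train, y_train, group=group_sizes)

# predict() and sort(): score the strings of one unseen binary (e.g. the 44
# strings in Figure 1), then order them by descending expected relevance.
X_query = rng.normal(size=(44, 16))
scores = ranker.predict(X_query)
ranked_indices = np.argsort(-scores)  # most relevant strings first
```

The group argument is what makes this learning to rank rather than plain regression: the lambdarank objective optimizes the relative ordering of strings within each binary's output instead of each string's score in isolation.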
In the initial train() step of Figure 2, over 25 thousand binaries are run through the Strings program to generate training data consisting of over 18 million total strings. Each training sample then corresponds to the concatenated list of ASCII and Unicode strings output by the Strings program on that input file. To train the model, these raw strings are transformed into numerical vectors containing natural language processing features like Shannon entropy and character co-occurrence frequencies, together with domain-specific signals like the presence of indicators of compromise (e.g. file paths, IP addresses, URLs, etc.), format strings, imports, and other relevant landmarks.
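As an illustration of that featurization step, the sketch below computes Shannon entropy and a few simplified indicator-of-compromise flags for a single string. The regexes are rough stand-ins chosen for readability, not FireEye's actual feature definitions.

```python
import math
import re
from collections import Counter

# Simplified indicator patterns; illustrative only.
IP_RE   = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
URL_RE  = re.compile(r"https?://", re.IGNORECASE)
PATH_RE = re.compile(r"^[a-zA-Z]:\\|^/(?:usr|etc|tmp|home)/")
FMT_RE  = re.compile(r"%[sdxfu]")

def shannon_entropy(s):
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) if n else 0.0

def featurize(s):
    return {
        "length": len(s),
        "entropy": shannon_entropy(s),
        "has_ip": bool(IP_RE.search(s)),
        "has_url": bool(URL_RE.search(s)),
        "has_path": bool(PATH_RE.search(s)),
        "has_format_string": bool(FMT_RE.search(s)),
    }

print(featurize("http://203.0.113.7/gate.php"))
```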
Figure 2: The ML-based LTR framework ranks strings based on their relevance for malware analysis. This figure illustrates different steps of the machine learning modeling process: the initial train() step is denoted by solid arrows and boxes, and the subsequent predict() and sort() steps are denoted by dotted arrows and boxes.
Each transformed string’s feature vector is associated with a non-negative integer label that represents its relevance for malware analysis. Labels range from 0 to 7, with higher numbers indicating increased relevance. To generate these labels, we leverage the subject matter knowledge of FLARE analysts to apply heuristics and impose high-level constraints on the resulting label distributions. While this weak supervision approach may generate noise and spurious errors compared to an ideal case where every string is manually labeled, it also provides an inexpensive and model-agnostic way to integrate domain expertise directly into our GBDT model.
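The sketch below illustrates the weak-supervision idea with a handful of hypothetical labeling heuristics that map strings onto the 0-7 relevance scale. Only the scheme, heuristics standing in for per-string manual labels, comes from the post; the specific rules and label values are invented for illustration.

```python
import re

# Hypothetical labeling heuristics mapping strings to the 0-7 relevance scale.
# The rules and values below are invented stand-ins for analyst expertise.
RULES = [
    (re.compile(r"https?://|\b\d{1,3}(?:\.\d{1,3}){3}\b"), 7),          # URLs / IP addresses
    (re.compile(r"HKEY_|SOFTWARE\\", re.IGNORECASE), 6),                # registry paths
    (re.compile(r"^[a-zA-Z]:\\"), 5),                                   # file paths
    (re.compile(r"%[sdxfu]"), 3),                                       # format strings
    (re.compile(r"^(GetProcAddress|LoadLibrary|Kernel32)", re.IGNORECASE), 1),  # common imports
]

def weak_label(string):
    for pattern, label in RULES:
        if pattern.search(string):
            return label
    return 0  # default: presumed irrelevant

print([weak_label(s) for s in ["http://203.0.113.7/gate.php", "GetProcAddress", "!!Qz"]])
# -> [7, 1, 0]
```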
Next during the predict() step of Figure 2, we use the trained GBDT model to predict ranks for the strings belonging to an input file that was not originally part of the training data, and in this example query we use the Strings output shown in Figure 1. The model predicts ranks for each string in the query as floating-point numbers that represent expected relevance scores, and in the final sort() step of Figure 2, strings are sorted in descending order by these scores. Figure 3 illustrates how this resulting prediction achieves the desired goal of ranking strings according to their relevance for malware analysis.
Figure 3: The resulting ranking on the strings depicted in both Figure 1 and in the truncated query of Figure 2. Contrast the relative ordering of the strings shown here to those otherwise identical lists.
The predicted and sorted string rankings in Figure 3 show network-based indicators on top of the list, followed by registry paths and entries. These reveal the potential C2 server and malicious behavior on the host. The subsequent output consisting of user-related information is more likely to be benign, but still worthy of investigation. Rounding out the list are common strings like Windows API functions and PE artifacts that tend to raise no red flags for the malware analyst.
Quantitative Evaluation
While it seems like the model qualitatively ranks the above strings as expected, we would like some quantitative way to assess the model’s performance more holistically. What evaluation criteria can we use to convince ourselves that the model generalizes beyond the coverage of our weak supervision sources, and to compare models that are trained with different parameters?
We turn to the recommender systems literature, which uses the Normalized Discounted Cumulative Gain (NDCG) score to evaluate the ranking of items (i.e. individual strings) in a collection (i.e. a Strings output). NDCG sounds complicated, but let’s boil it down one letter at a time:
“G” is for gain, which corresponds to the magnitude of each string’s relevance.
“C” is for cumulative, which refers to the cumulative gain or summed total of every string’s relevance.
“D” is for discounted, which divides each string’s predicted relevance by a monotonically increasing function like the logarithm of its ranked position, reflecting the goal of having the most relevant strings ranked towards the top of our predictions.
“N” is for normalized, which means dividing DCG scores by ideal DCG scores calculated for a ground truth holdout dataset, which we obtain from FLARE-identified relevant strings contained within historical malware reports. Normalization makes it possible to compare scores across samples since the number of strings within different Strings outputs can vary widely.
Figure 4: Kernel Density Estimate of NDCG@100 scores for Strings outputs from the holdout dataset. Scores are calculated for the original ordering after simply running the Strings program on each binary (gray) versus the predicted ordering from the trained GBDT model (red).
In practice, we take the first k strings indexed by their ranks within a single Strings output, where the k parameter is chosen based on how many strings a malware analyst will attend to or deem relevant on average. For our purposes we set k = 100 based on the approximate average number of relevant strings per Strings output. NDCG@k scores are bounded between 0 and 1, with scores closer to 1 indicating better prediction quality in which more relevant strings surface towards the top. This measurement allows us to evaluate the predictions from a given model versus those generated by other models and ranked with different algorithms.
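Putting the letter-by-letter description together, a minimal NDCG@k implementation looks roughly like the following, where the input is the list of ground-truth relevances arranged in the order the model ranked the strings. The logarithmic discount matches the description above; some formulations also use an exponential gain (2^rel - 1) instead of the raw relevance.

```python
import math

def dcg_at_k(relevances, k):
    # Gain divided by a logarithmic discount that grows with rank position.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances_in_predicted_order, k=100):
    # Normalize by the ideal DCG: the same relevances in the best possible order.
    ideal = sorted(relevances_in_predicted_order, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    if ideal_dcg == 0:
        return 0.0
    return dcg_at_k(relevances_in_predicted_order, k) / ideal_dcg

# Ground-truth relevances listed in the order the model ranked the strings:
print(ndcg_at_k([7, 6, 0, 3, 0, 1], k=100))
```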
To quantitatively assess model performance, we run the strings from each sample that have ground truth FLARE reports through the predict() step of Figure 2, and compare their predicted ranks with a baseline of the original ranking of strings output by Strings. The divergence in distributions of NDCG@100 scores between these two approaches demonstrates that the trained GBDT model learns a useful structure that generalizes well to the independent holdout set (Figure 4).
Conclusion
In this blog post, we introduced an ML model that learns to rank strings based on their relevance for malware analysis. Our results illustrate that it can rank Strings output based both on qualitative inspection (Figure 3) and quantitative evaluation of NDCG@k (Figure 4). Since Strings is so commonly applied during malware analysis at FireEye and elsewhere, this model could significantly reduce the overall time required to investigate suspected malicious binaries at scale. We plan on continuing to improve its NDCG@k scores by training it with more high fidelity labeled data, incorporating more sophisticated modeling and featurization techniques, and soliciting further analyst feedback from field testing.
It’s well known that malware authors go to great lengths to conceal useful strings from analysts, and a potential blind spot to consider for this model is that the utility of Strings itself can be thwarted by obfuscation. However, open source tools like the FireEye Labs Obfuscated Strings Solver (FLOSS) can be used as a drop-in replacement for Strings. FLOSS automatically extracts printable strings just as Strings does, but additionally reveals obfuscated strings that have been encoded, packed, or manually constructed on the stack. The model can be readily trained on FLOSS outputs to rank even obfuscated strings. Furthermore, since it can be applied to arbitrary lists of strings, the model could also be used to rank strings extracted from live memory dumps and sandbox runs.
This work represents a collaboration between the FDS and FLARE teams, which together build predictive models to help find evil and improve outcomes for FireEye’s customers and products. If you are interested in this mission, please consider joining the team by applying to one of our job openings.
0 notes
Text
Wargame Wednesday: Battle of the Bulge 18th and 62nd Volksgrenadier Divisions, 14th Armored Group and the 106th Infantry Division
Starting Positions
Introduction for this series here. This post discusses the terrain, some items considered during scenario design and a Q&A with the scenario designer.
The 18th Volksgrenadier (VG) Division holds my right flank and was opposed by the 14th Armored Group (AG) and the 422nd Regiment of the ill-fated 106th Infantry Division (ID). The 62nd VG Division is on my left and their jumping off positions are west of the German town of Prum. The armored Führerbegleit Brigade (Führer Escort) is in reserve behind the 18th VG Division, ready to exploit weaknesses in the American line.
Link to a map showing the initial attacks on the 106th ID from Hugh M. Cole’s The Ardennes: Battle of the Bulge.
The image above uses satellite imagery to show the importance of the Losheim Gap on the course of the battle. A larger image, discussion on the starting positions and reason for the blue line are available by clicking on the image or here.
More after the jump.
Terrain
Hugh Cole discusses the battlefield and I’ve selected some of his text for context.
Page 43. The road network:
“The road net in 1944 was far richer than the population and the economic activity of the Ardennes would seem to warrant. This was not the result of military planning, as in the case of the Eifel rail lines, but rather of Belgian and Luxemburgian recognition of the value of automobile tourisme just prior to World War II. All of the main roads had hard surfaces, generally of macadam. Although the road builders tried to follow the more level stretches of the ridge lines or wider valley floors, in many cases the roads twisted sharply and turned on steep grades down into a deep ravine and out again on the opposite side. The bridges were normally built of stone.”
“The normal settlement in the Ardennes was the small village with stone houses and very narrow, winding streets. These villages often constricted the through road to single-lane traffic. Another military feature was the lone farmstead or inn which gave its name to the crossroads at which it stood.”
Pg 46. Geography:
“The geography of the Ardennes leads inevitably to the channelization of large troop movements east to west, will tend to force larger units to “pile up” on each other, and restricts freedom of maneuver once the direction of attack and order of battle are fixed. To a marked degree the military problem posed by the terrain is that of movement control rather than maneuver in the classic sense.”
“What the German planners saw in 1944 was this: the Ardennes could be traversed by large forces even when these were heavily mechanized. An attack from east to west across the massif would encounter initially the greatest obstacles of terrain, but these obstacles would decrease in number as an advance neared the Meuse.”
“This is mountainous country, with much rainfall, deep snows in winter, and raw, harsh winds sweeping across the plateaus. The heaviest rains come in November and December. The mists are frequent and heavy, lasting well into late morning before they break. Precise predictions by the military meteorologist, however, are difficult because the Ardennes lies directly on the boundary between the northwestern and central European climatic regions and thus is affected by the conjuncture of weather moving east from the British Isles and the Atlantic with that moving westward out of Russia. At Stavelot freezing weather averages 112 days a year, at Bastogne 145 days. The structure of the soil will permit tank movement when the ground is frozen, but turns readily to a clayey mire in time of rain. Snowfall often attains a depth of ten to twelve inches in a 24-hour period. Snow lingers for a long time in the Ardennes but-and this is important in recounting the events of 1944-the deep snows come late.”
Game Design Considerations
The Campaign Series game engine allows visibility to change on a turn-by-turn basis. In practice, most designers keep visibility the same for the whole scenario, but in this scenario visibility changes on a daily basis (every 6 turns). My preference would be greater granularity of visibility within the six-turn day (e.g. fog in the mornings), and real fog can be localized, especially in valleys and gullies, but visibility settings are universal across the map.
Changing road and field conditions are harder to emulate. In this scenario, snow covers the ground throughout the game but conditions during the battle changed from mud to snow to frozen ground and back to mud. It is possible to change the ground conditions but that would require every player to manually update a game file. In the interests of playability, snow stays on the ground throughout.
A scenario designer always has to weigh the trade-offs between realism, playability and the constraints of the game engine. Changes to one aspect can have second- or third-order effects on the others. For this scenario, the designer has come up with the following compromises:
Realism
Changing visibility on a day by day basis, an improvement over fog throughout the entire game.
6 turns per day. There is plenty of controversy in the Campaign Series world over how much time one turn represents. Over the years I have developed the following rule of thumb: for smaller scenarios (up to the battalion level) each turn can account for a shorter period of time, even down to 20 to 30 minutes, but for larger scenarios 6 to 8 turns per day can match the historical pace of an offensive. Without getting into a long discussion, the bottom line is that many daylight hours are spent on coordination, resupply, regrouping, taking cover, etc. Some days it takes a while to motivate oneself to go to work. Imagine the time needed to motivate a squad or platoon to charge a machine gun nest.
Playability
Elected to keep the terrain as snow throughout. Not very historical, but changing movement rates would throw off any game balance achieved in this version.
The scenario starts at first light. A lot of dramatic action is missed (the initial German artillery bombardment; German searchlights illuminating the battlefield by reflection off low-lying clouds; some Volksgrenadiers caught advancing in the open because of that illumination; and the initial engagement with the 14th Armored Group at Krewinkel), but the way the game engine simulates night combat was unsatisfactory.
The OOBs are the subject of next week's post, but in short the organizations are pared down a little. For example, battalion-level HQs are not included in the game, along with smaller-caliber mortars (especially the American 60mm mortar, which doesn't justify the trade-off between game management and effectiveness).
Q&A with the Scenario Designer
Scott Cole: How long have you been working on this scenario?
Von Earlmann: I guess about 10 years or so to include my first modding attempts on the original East Front.
SC: How long did it take to create the map?
VE: The map was a long process as I started with a smaller version and kept adding to it as the scenario grew in my mind.
SC: What was your process for map creation (e.g. which sources did you use)?
VE: I actually found a complete set of battle maps for the whole Ardennes offensive at a lawn sale years ago which was what gave me the initial idea for this monster scenario as it had one map just for the V Panzer AOR (of course, I made the mistake of lending them to someone and they are now long gone). Also, used a lot of the maps and descriptions from the book “A Time for Trumpets“. The last expansion came from maps that Huib (Note: another master scenario designer) sent me from the actual area. I used them for a lot of the terrain and distances. I never did have topographic maps with the elevations so had to wing that but, figured it was the same map for both sides and does depict the toughness of the area to fight in.
SC: What was your philosophy for the OOBs?
VE: The main thing with OOBs in a large scenario is reducing the number of HQs for smoother supply purposes. I simply eliminated most of the battalion HQs and have the entire regiment trace to regimental HQ by moving the platoons directly under regimental or brigade HQs (note: this is done in a separate OOB file unique to the scenario). It takes a lot of renaming but makes for better command and control and smoother supply.
I usually take out a lot of the smaller indirect fire units such as infantry guns, 81mm mortars and lower and things like machine gun sections. There is nothing that will ruin any large scenario more than watching replays with all those small units firing (Note: usually to no effect, though a direct hit from 81mm mortars can quickly change your plans for the day). In each scenario there is more than enough artillery to make it realistic, especially in this scenario as the Americans have plenty of artillery units (Note: I can attest to that….).
SC: When does this scenario start? I'm guessing after the pre-dawn night combat. For example, the 18th VG Regiment starts west of Krewinkel.
VE: Again, I used some poetic license to place units at start positions. It was a bit easier with the Germans as their forces divide up well over the three map sections. The Americans were a bit tougher as one of the divisions was spread over two sectors (Note: the 106th has some platoons in the center sector). Doing this represents how thin the Americans were spread across the front line.
The scenario covers actions from 16 through 31 December. I used that time frame to set weather, reinforcements, unit releases, etc. The basis is one day equals six turns. I know the game is supposed to use 6-minute turns, but that is a discussion for another time, and the pacing seems to work in this scenario.
Next Week
I’ll go over the OOBs for all units mentioned in today’s post and will also discuss the fighting at Krewinkel, as the scenario starts after this engagement.
References and Links
The Ardennes: Battle of the Bulge Hugh M. Cole
14th Cavalry in the Losheim Gap
Project 1944. Military historians practicing “living history”.
Index
Intro Bulge Series Post.
Weather chart.
Wargame Wednesday: Battle of the Bulge 18th and 62nd Volksgrenadier Divisions, 14th Armored Group and the 106th Infantry Division published first on https://medium.com/@ReloadedPCGames
0 notes
Link
Next week professional services firm Accenture will be launching a new tool to help its customers identify and fix unfair bias in AI algorithms. The idea is to catch discrimination before it gets baked into models and can cause human damage at scale.
The “AI fairness tool”, as it’s being described, is one piece of a wider package the consultancy firm has recently started offering its customers around transparency and ethics for machine learning deployments — while still pushing businesses to adopt and deploy AI. (So the intent, at least, can be summed up as: ‘Move fast and don’t break things’. Or, in very condensed corporate-speak: “Agile ethics”.)
“Most of last year was spent… understanding this realm of ethics and AI and really educating ourselves, and I feel that 2018 has really become the year of doing — the year of moving beyond virtue signaling. And moving into actual creation and development,” says Rumman Chowdhury, Accenture’s responsible AI lead — who joined the company when the role was created, in January 2017.
“For many of us, especially those of us who are in this space all the time, we’re tired of just talking about it — we want to start building and solving problems, and that’s really what inspired this fairness tool.”
Chowdhury says Accenture is defining fairness for this purpose as “equal outcomes for different people”.
“There is no such thing as a perfect algorithm,” she says. “We know that models will be wrong sometimes. We consider it unfair if there are different degrees of wrongness… for different people, based on characteristics that should not influence the outcomes.”
She envisages the tool having wide application and utility across different industries and markets, suggesting early adopters are likely those in the most heavily regulated industries — such as financial services and healthcare, where “AI can have a lot of potential but has a very large human impact”.
“We’re seeing increasing focus on algorithmic bias, fairness. Just this past week we’ve had Singapore announce an AI ethics board. Korea announce an AI ethics board. In the US we already have industry creating different groups — such as The Partnership on AI. Google just released their ethical guidelines… So I think industry leaders, as well as non-tech companies, are looking for guidance. They are looking for standards and protocols and something to adhere to because they want to know that they are safe in creating products.
“It’s not an easy task to think about these things. Not every organization or company has the resources to. So how might we better enable that to happen? Through good legislation, through enabling trust, communication. And also through developing these kinds of tools to help the process along.”
The tool — which uses statistical methods to assess AI models — is focused on one type of AI bias problem that’s “quantifiable and measurable”. Specifically it’s intended to help companies assess the data sets they feed to AI models to identify biases related to sensitive variables and course correct for them, as it’s also able to adjust models to equalize the impact.
To boil it down further, the tool examines the “data influence” of sensitive variables (age, gender, race etc) on other variables in a model — measuring how much of a correlation the variables have with each other to see whether they are skewing the model and its outcomes.
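A minimal sketch of that first step, assuming tabular data in a pandas DataFrame with hypothetical column names, might flag every non-sensitive feature whose correlation with the sensitive variable crosses a chosen threshold:

```python
import pandas as pd

# Hypothetical columns; "age" plays the role of the sensitive variable.
df = pd.DataFrame({
    "age":             [23, 35, 47, 52, 29, 61],
    "num_children":    [0, 1, 2, 3, 0, 4],
    "owns_home":       [0, 0, 1, 1, 0, 1],
    "account_balance": [1200, 5400, 8300, 9100, 2100, 12400],
})

sensitive = "age"
correlations = df.corr(numeric_only=True)[sensitive].drop(sensitive)
flagged = correlations[correlations.abs() > 0.3]  # threshold chosen arbitrarily
print(flagged)  # features the sensitive variable may be indirectly driving
```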
It can then remove the impact of sensitive variables — leaving only the residual impact that, say, ‘likelihood to own a home’ would have on a model output, instead of the output being derived from both age and likelihood to own a home, and therefore risking decisions biased against certain age groups.
“There’s two parts to having sensitive variables like age, race, gender, ethnicity etc motivating or driving your outcomes. So the first part of our tool helps you identify which variables in your dataset that are potentially sensitive are influencing other variables,” she explains. “It’s not as easy as saying: Don’t include age in your algorithm and it’s fine. Because age is very highly correlated with things like number of children you have, or likelihood to be married. Things like that. So we need to remove the impact that the sensitive variable has on other variables which we’re considering to be not sensitive and necessary for developing a good algorithm.”
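One way to realize the second step, removing the sensitive variable's impact, is to regress each remaining feature on the sensitive attribute and keep only the residuals, so the downstream model never sees the portion of the feature explained by, say, age. This is a hedged sketch of that general idea, not a description of Accenture's actual procedure:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def residualize(features, sensitive):
    """Remove the linear effect of a sensitive attribute from each feature.

    One simple realization of the idea described above; the tool itself may
    use a different statistical procedure.
    """
    features = np.asarray(features, dtype=float)
    sensitive = np.asarray(sensitive, dtype=float).reshape(-1, 1)
    residuals = np.empty_like(features)
    for j in range(features.shape[1]):
        fit = LinearRegression().fit(sensitive, features[:, j])
        residuals[:, j] = features[:, j] - fit.predict(sensitive)
    return residuals

rng = np.random.default_rng(1)
age = rng.integers(20, 70, size=200)
likelihood_own_home = 0.02 * age + rng.normal(scale=0.1, size=200)  # age-correlated
X_fair = residualize(np.column_stack([likelihood_own_home]), age)
print(np.corrcoef(age, X_fair[:, 0])[0, 1])  # close to zero after correction
```

After residualization the correlation between age and the adjusted feature should be close to zero, which is the property the tool's visualizations are meant to surface.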
Chowdhury cites an example in the US, where algorithms used to determine parole outcomes were less likely to be wrong for white men than for black men. “That was unfair,” she says. “People were denied parole, who should have been granted parole — and it happened more often for black people than for white people. And that’s the kind of fairness we’re looking at. We want to make sure that everybody has equal opportunity.”
However, a quirk of AI algorithms is that when models are corrected for unfair bias there can be a reduction in their accuracy. So the tool also calculates the accuracy of any trade-off to show whether improving the model’s fairness will make it less accurate and to what extent.
Users get a before and after visualization of any bias corrections. And can essentially choose to set their own ‘ethical bar’ based on fairness vs accuracy — using a toggle bar on the platform — assuming they are comfortable compromising the former for the latter (and, indeed, comfortable with any associated legal risk if they actively select for an obviously unfair tradeoff).
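To see how the fairness-versus-accuracy trade-off might be quantified, the sketch below trains the same simple classifier on the original and on the residualized features and reports accuracy next to a crude disparity measure (the gap in error rates between two age groups). The data, metric, and threshold are all invented for illustration; the article does not specify how the tool computes either side of its toggle.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 2000
age = rng.integers(20, 70, size=n).astype(float)
own_home = 0.02 * age + rng.normal(scale=0.2, size=n)              # age-correlated feature
y = (own_home + rng.normal(scale=0.2, size=n) > 0.9).astype(int)   # toy credit outcome

X_raw = own_home.reshape(-1, 1)
# Remove the linear effect of age from the feature (see the previous sketch).
trend = LinearRegression().fit(age.reshape(-1, 1), own_home).predict(age.reshape(-1, 1))
X_fair = (own_home - trend).reshape(-1, 1)

def error_rate_gap(model, X):
    # Crude disparity measure: difference in error rates between age groups.
    errors = model.predict(X) != y
    older = age >= 45
    return abs(errors[older].mean() - errors[~older].mean())

# Report both sides of the trade-off for each feature set.
for name, X in [("original", X_raw), ("corrected", X_fair)]:
    clf = LogisticRegression().fit(X, y)
    print(name,
          "accuracy:", round(accuracy_score(y, clf.predict(X)), 3),
          "error-rate gap:", round(error_rate_gap(clf, X), 3))
```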
In Europe, for example, there are rules that place an obligation on data processors to prevent errors, bias and discrimination in automated decisions. They can also be required to give individuals information about the logic of an automated decision that effects them. So actively choosing a decision model that’s patently unfair would invite a lot of legal risk.
While Chowdhury concedes there is an accuracy cost to correcting bias in an AI model, she says trade-offs can “vary wildly”. “It can be that your model is incredibly unfair and to correct it to be a lot more fair is not going to impact your model that much… maybe by 1% or 2% [accuracy]. So it’s not that big of a deal. And then in other cases you may see a wider shift in model accuracy.”
She says it’s also possible the tool might raise substantial questions for users over the appropriateness of an entire data-set — essentially showing them that a data-set is “simply inadequate for your needs”.
“If you see a huge shift in your model accuracy that probably means there’s something wrong in your data. And you might need to actually go back and look at your data,” she says. “So while this tool does help with corrections it is part of this larger process — where you may actually have to go back and get new data, get different data. What this tool does is able to highlight that necessity in a way that’s easy to understand.
“Previously people didn’t have that ability to visualize and understand that their data may actually not be adequate for what they’re trying to solve for.”
She adds: “This may have been data that you’ve been using for quite some time. And it may actually cause people to re-examine their data, how it’s shaped, how societal influences influence outcomes. That’s kind of the beauty of artificial intelligence as a sort of subjective observer of humanity.”
While tech giants may have developed their own internal tools for assessing the neutrality of their AI algorithms — Facebook has one called Fairness Flow, for example — Chowdhury argues that most non-tech companies will not be able to develop their own similarly sophisticated tools for assessing algorithmic bias.
Which is where Accenture is hoping to step in with a support service — and one that also embeds ethical frameworks and toolkits into the product development lifecycle, so R&D remains as agile as possible.
“One of the questions that I’m always faced with is how do we integrate ethical behavior in way that aligns with rapid innovation. So every company is really adopting this idea of agile innovation and development, etc. People are talking a lot about three to six month iterative processes. So I can’t come in with an ethical process that takes three months to do. So part of one of my constraints is how do I create something that’s easy to integrate into this innovation lifecycle.”
One specific drawback is that the tool has not yet been verified to work across different types of AI models. Chowdhury says it’s principally been tested on models that use classification to group people for the purposes of building AI models, so it may not be suitable for other types. (Though she says their next step will be to test it for “other kinds of commonly used models”.)
More generally, she says the challenge is that many companies are hoping for a magic “push button” tech fix-all for algorithmic bias. Which of course simply does not — and will not — exist.
“If anything there’s almost an overeagerness in the market for a technical solution to all their problems… and this is not the case where tech will fix everything,” she warns. “Tech can definitely help but part of this is having people understand that this is an informational tool, it will help you, but it’s not going to solve all your problems for you.”
The tool was co-prototyped with the help of a data study group at the UK’s Alan Turing Institute, using publicly available data-sets.
During prototyping, when the researchers were using a German data-set relating to credit risk scores, Chowdhury says the team realized that nationality was influencing a lot of other variables. And for credit risk outcomes they found decisions were more likely to be wrong for non-German nationals.
They then used the tool to equalize the outcome and found it didn’t have a significant impact on the model’s accuracy. “So at the end of it you have a model that is just as accurate as the previous models were in determining whether or not somebody is a credit risk. But we were confident in knowing that one’s nationality did not have undue influence over that outcome.”
A paper about the prototyping of the tool will be made publicly available later this year, she adds.
from TechCrunch https://ift.tt/2LD0Vmk
0 notes